2026-02-05 01:21:07.626484 | Job console starting
2026-02-05 01:21:07.640239 | Updating git repos
2026-02-05 01:21:07.727547 | Cloning repos into workspace
2026-02-05 01:21:08.015769 | Restoring repo states
2026-02-05 01:21:08.040811 | Merging changes
2026-02-05 01:21:08.040933 | Checking out repos
2026-02-05 01:21:08.346162 | Preparing playbooks
2026-02-05 01:21:09.068638 | Running Ansible setup
2026-02-05 01:21:13.911529 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-05 01:21:14.662423 |
2026-02-05 01:21:14.662703 | PLAY [Base pre]
2026-02-05 01:21:14.680006 |
2026-02-05 01:21:14.680140 | TASK [Setup log path fact]
2026-02-05 01:21:14.710264 | orchestrator | ok
2026-02-05 01:21:14.727507 |
2026-02-05 01:21:14.727692 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-05 01:21:14.768435 | orchestrator | ok
2026-02-05 01:21:14.780699 |
2026-02-05 01:21:14.780813 | TASK [emit-job-header : Print job information]
2026-02-05 01:21:14.839080 | # Job Information
2026-02-05 01:21:14.839350 | Ansible Version: 2.16.14
2026-02-05 01:21:14.839412 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-05 01:21:14.839470 | Pipeline: periodic-midnight
2026-02-05 01:21:14.839507 | Executor: 521e9411259a
2026-02-05 01:21:14.839542 | Triggered by: https://github.com/osism/testbed
2026-02-05 01:21:14.839603 | Event ID: d2fbb7a4c0254005bfb8ea044578dfa6
2026-02-05 01:21:14.849189 |
2026-02-05 01:21:14.849340 | LOOP [emit-job-header : Print node information]
2026-02-05 01:21:14.985390 | orchestrator | ok:
2026-02-05 01:21:14.985795 | orchestrator | # Node Information
2026-02-05 01:21:14.985849 | orchestrator | Inventory Hostname: orchestrator
2026-02-05 01:21:14.985886 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-05 01:21:14.985919 | orchestrator | Username: zuul-testbed03
2026-02-05 01:21:14.985949 | orchestrator | Distro: Debian 12.13
2026-02-05 01:21:14.985984 | orchestrator | Provider: static-testbed
2026-02-05 01:21:14.986015 | orchestrator | Region:
2026-02-05 01:21:14.986046 | orchestrator | Label: testbed-orchestrator
2026-02-05 01:21:14.986075 | orchestrator | Product Name: OpenStack Nova
2026-02-05 01:21:14.986104 | orchestrator | Interface IP: 81.163.193.140
2026-02-05 01:21:15.014173 |
2026-02-05 01:21:15.014358 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-05 01:21:15.557722 | orchestrator -> localhost | changed
2026-02-05 01:21:15.575750 |
2026-02-05 01:21:15.575942 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-05 01:21:16.688679 | orchestrator -> localhost | changed
2026-02-05 01:21:16.717291 |
2026-02-05 01:21:16.717526 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-05 01:21:17.037663 | orchestrator -> localhost | ok
2026-02-05 01:21:17.046007 |
2026-02-05 01:21:17.046156 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-05 01:21:17.090642 | orchestrator | ok
2026-02-05 01:21:17.109815 | orchestrator | included: /var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-05 01:21:17.118355 |
2026-02-05 01:21:17.118458 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-05 01:21:20.174511 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-05 01:21:20.174785 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/work/dfd18b6ff29c46d7a0487cf75b178ce7_id_rsa
2026-02-05 01:21:20.174855 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/work/dfd18b6ff29c46d7a0487cf75b178ce7_id_rsa.pub
2026-02-05 01:21:20.174887 | orchestrator -> localhost | The key fingerprint is:
2026-02-05 01:21:20.174913 | orchestrator -> localhost | SHA256:4nMm5rYhHlVAAaMx7orOmh+kjQoxb8sAfS0/fS7m6hw zuul-build-sshkey
2026-02-05 01:21:20.174937 | orchestrator -> localhost | The key's randomart image is:
2026-02-05 01:21:20.174972 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-05 01:21:20.174996 | orchestrator -> localhost | | o oo+. |
2026-02-05 01:21:20.175019 | orchestrator -> localhost | | . + . . |
2026-02-05 01:21:20.175042 | orchestrator -> localhost | | o . |
2026-02-05 01:21:20.175063 | orchestrator -> localhost | | o . . |
2026-02-05 01:21:20.175084 | orchestrator -> localhost | |+ + o + S |
2026-02-05 01:21:20.175112 | orchestrator -> localhost | |oX . = o |
2026-02-05 01:21:20.175133 | orchestrator -> localhost | |B = o E + . |
2026-02-05 01:21:20.175153 | orchestrator -> localhost | |== + *.Ooo |
2026-02-05 01:21:20.175174 | orchestrator -> localhost | |=++ .o*=... |
2026-02-05 01:21:20.175195 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-05 01:21:20.175249 | orchestrator -> localhost | ok: Runtime: 0:00:02.528507
2026-02-05 01:21:20.183622 |
2026-02-05 01:21:20.183732 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-05 01:21:20.214132 | orchestrator | ok
2026-02-05 01:21:20.224208 | orchestrator | included: /var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-05 01:21:20.233336 |
2026-02-05 01:21:20.233437 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-05 01:21:20.257625 | orchestrator | skipping: Conditional result was False
2026-02-05 01:21:20.265302 |
2026-02-05 01:21:20.265402 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-05 01:21:20.893965 | orchestrator | changed
2026-02-05 01:21:20.904621 |
2026-02-05 01:21:20.904770 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-05 01:21:21.215684 | orchestrator | ok
2026-02-05 01:21:21.224310 |
2026-02-05 01:21:21.224452 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-05 01:21:21.677836 | orchestrator | ok
2026-02-05 01:21:21.686089 |
2026-02-05 01:21:21.686255 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-05 01:21:22.141615 | orchestrator | ok
2026-02-05 01:21:22.150333 |
2026-02-05 01:21:22.150462 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-05 01:21:22.186086 | orchestrator | skipping: Conditional result was False
2026-02-05 01:21:22.195352 |
2026-02-05 01:21:22.195477 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-05 01:21:22.717148 | orchestrator -> localhost | changed
2026-02-05 01:21:22.738335 |
2026-02-05 01:21:22.738483 | TASK [add-build-sshkey : Add back temp key]
2026-02-05 01:21:23.090095 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/work/dfd18b6ff29c46d7a0487cf75b178ce7_id_rsa (zuul-build-sshkey)
2026-02-05 01:21:23.090894 | orchestrator -> localhost | ok: Runtime: 0:00:00.019336
2026-02-05 01:21:23.106936 |
2026-02-05 01:21:23.107108 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-05 01:21:23.573670 | orchestrator | ok
2026-02-05 01:21:23.583327 |
2026-02-05 01:21:23.583469 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-05 01:21:23.618472 | orchestrator | skipping: Conditional result was False
2026-02-05 01:21:23.681720 |
2026-02-05 01:21:23.681879 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-05 01:21:24.157398 | orchestrator | ok
2026-02-05 01:21:24.174881 |
2026-02-05 01:21:24.175047 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-05 01:21:24.224274 | orchestrator | ok
2026-02-05 01:21:24.235738 |
2026-02-05 01:21:24.235878 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-05 01:21:24.545937 | orchestrator -> localhost | ok
2026-02-05 01:21:24.562401 |
2026-02-05 01:21:24.562618 | TASK [validate-host : Collect information about the host]
2026-02-05 01:21:25.847123 | orchestrator | ok
2026-02-05 01:21:25.864120 |
2026-02-05 01:21:25.864243 | TASK [validate-host : Sanitize hostname]
2026-02-05 01:21:25.950031 | orchestrator | ok
2026-02-05 01:21:25.958388 |
2026-02-05 01:21:25.958524 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-05 01:21:26.574419 | orchestrator -> localhost | changed
2026-02-05 01:21:26.588106 |
2026-02-05 01:21:26.588252 | TASK [validate-host : Collect information about zuul worker]
2026-02-05 01:21:27.054791 | orchestrator | ok
2026-02-05 01:21:27.062427 |
2026-02-05 01:21:27.062577 | TASK [validate-host : Write out all zuul information for each host]
2026-02-05 01:21:27.648220 | orchestrator -> localhost | changed
2026-02-05 01:21:27.668035 |
2026-02-05 01:21:27.668206 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-05 01:21:27.969396 | orchestrator | ok
2026-02-05 01:21:27.980195 |
2026-02-05 01:21:27.980336 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-05 01:21:52.505748 | orchestrator | changed:
2026-02-05 01:21:52.506116 | orchestrator | .d..t...... src/
2026-02-05 01:21:52.506163 | orchestrator | .d..t...... src/github.com/
2026-02-05 01:21:52.506194 | orchestrator | .d..t...... src/github.com/osism/
2026-02-05 01:21:52.506223 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-05 01:21:52.506249 | orchestrator | RedHat.yml
2026-02-05 01:21:52.521478 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-05 01:21:52.521495 | orchestrator | RedHat.yml
2026-02-05 01:21:52.521546 | orchestrator | = 1.53.0"...
2026-02-05 01:22:04.189631 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-05 01:22:04.683646 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-05 01:22:05.406445 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-05 01:22:05.779255 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-05 01:22:06.279337 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-05 01:22:06.392560 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-05 01:22:07.367291 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-05 01:22:07.367359 | orchestrator |
2026-02-05 01:22:07.367367 | orchestrator | Providers are signed by their developers.
2026-02-05 01:22:07.367373 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-05 01:22:07.367379 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-05 01:22:07.367387 | orchestrator |
2026-02-05 01:22:07.367436 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-05 01:22:07.367459 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-05 01:22:07.367464 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-05 01:22:07.367469 | orchestrator | you run "tofu init" in the future.
2026-02-05 01:22:07.367714 | orchestrator |
2026-02-05 01:22:07.367723 | orchestrator | OpenTofu has been successfully initialized!
2026-02-05 01:22:07.367730 | orchestrator |
2026-02-05 01:22:07.367734 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-05 01:22:07.367739 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-05 01:22:07.367745 | orchestrator | should now work.
2026-02-05 01:22:07.367749 | orchestrator |
2026-02-05 01:22:07.367754 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-05 01:22:07.367759 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-05 01:22:07.367769 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-05 01:22:07.583015 | orchestrator | Created and switched to workspace "ci"!
2026-02-05 01:22:07.583111 | orchestrator |
2026-02-05 01:22:07.583125 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-05 01:22:07.583136 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-05 01:22:07.583146 | orchestrator | for this configuration.
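[Editor's note] The provider installation above implies a `required_providers` block roughly like the following sketch. Only the ">= 2.2.0" constraint for hashicorp/local is visible in the log; the other constraints are truncated or not shown, so they are left open here and this is not the testbed repository's actual configuration:

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # constraint shown in the init output above
    }
    null = {
      source = "hashicorp/null"
      # version constraint not visible in the truncated log
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # version constraint not visible in the truncated log
    }
  }
}
```

Running `tofu init` against such a block resolves each constraint, installs the providers, and writes the `.terraform.lock.hcl` file mentioned in the output, which pins the selected versions for future runs.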
2026-02-05 01:22:07.736787 | orchestrator | ci.auto.tfvars
2026-02-05 01:22:07.744224 | orchestrator | default_custom.tf
2026-02-05 01:22:08.673328 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-05 01:22:09.196081 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-05 01:22:09.465563 | orchestrator |
2026-02-05 01:22:09.465641 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-05 01:22:09.465650 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-05 01:22:09.466544 | orchestrator |   + create
2026-02-05 01:22:09.467493 | orchestrator |  <= read (data resources)
2026-02-05 01:22:09.467529 | orchestrator |
2026-02-05 01:22:09.467536 | orchestrator | OpenTofu will perform the following actions:
2026-02-05 01:22:09.469185 | orchestrator |
2026-02-05 01:22:09.469221 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-05 01:22:09.469231 | orchestrator |   # (config refers to values not yet known)
2026-02-05 01:22:09.469237 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-05 01:22:09.469243 | orchestrator |       + checksum    = (known after apply)
2026-02-05 01:22:09.469249 | orchestrator |       + created_at  = (known after apply)
2026-02-05 01:22:09.469255 | orchestrator |       + file        = (known after apply)
2026-02-05 01:22:09.469261 | orchestrator |       + id          = (known after apply)
2026-02-05 01:22:09.469288 | orchestrator |       + metadata    = (known after apply)
2026-02-05 01:22:09.469296 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-05 01:22:09.469305 | orchestrator |       + min_ram_mb  = (known after apply)
2026-02-05 01:22:09.469315 | orchestrator |       + most_recent = true
2026-02-05 01:22:09.469323 | orchestrator |       + name        = (known after apply)
2026-02-05 01:22:09.469328 | orchestrator |       + protected   = (known after apply)
2026-02-05 01:22:09.469334 | orchestrator |       + region      = (known after apply)
2026-02-05 01:22:09.469343 | orchestrator |       + schema      = (known after apply)
2026-02-05 01:22:09.469348 | orchestrator |       + size_bytes  = (known after apply)
2026-02-05 01:22:09.469353 | orchestrator |       + tags        = (known after apply)
2026-02-05 01:22:09.469359 | orchestrator |       + updated_at  = (known after apply)
2026-02-05 01:22:09.469365 | orchestrator |     }
2026-02-05 01:22:09.469535 | orchestrator |
2026-02-05 01:22:09.469556 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-05 01:22:09.469563 | orchestrator |   # (config refers to values not yet known)
2026-02-05 01:22:09.469569 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-05 01:22:09.469574 | orchestrator |       + checksum    = (known after apply)
2026-02-05 01:22:09.469580 | orchestrator |       + created_at  = (known after apply)
2026-02-05 01:22:09.469585 | orchestrator |       + file        = (known after apply)
2026-02-05 01:22:09.469591 | orchestrator |       + id          = (known after apply)
2026-02-05 01:22:09.469596 | orchestrator |       + metadata    = (known after apply)
2026-02-05 01:22:09.469602 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-05 01:22:09.469607 | orchestrator |       + min_ram_mb  = (known after apply)
2026-02-05 01:22:09.469613 | orchestrator |       + most_recent = true
2026-02-05 01:22:09.469619 | orchestrator |       + name        = (known after apply)
2026-02-05 01:22:09.469624 | orchestrator |       + protected   = (known after apply)
2026-02-05 01:22:09.469629 | orchestrator |       + region      = (known after apply)
2026-02-05 01:22:09.469635 | orchestrator |       + schema      = (known after apply)
2026-02-05 01:22:09.469640 | orchestrator |       + size_bytes  = (known after apply)
2026-02-05 01:22:09.469646 | orchestrator |       + tags        = (known after apply)
2026-02-05 01:22:09.469651 | orchestrator |       + updated_at  = (known after apply)
2026-02-05 01:22:09.469657 | orchestrator |     }
2026-02-05 01:22:09.469766 | orchestrator |
2026-02-05 01:22:09.469783 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-05 01:22:09.469790 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-05 01:22:09.469796 | orchestrator |       + content              = (known after apply)
2026-02-05 01:22:09.469802 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 01:22:09.469807 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 01:22:09.469813 | orchestrator |       + content_md5          = (known after apply)
2026-02-05 01:22:09.469818 | orchestrator |       + content_sha1         = (known after apply)
2026-02-05 01:22:09.469824 | orchestrator |       + content_sha256       = (known after apply)
2026-02-05 01:22:09.469829 | orchestrator |       + content_sha512       = (known after apply)
2026-02-05 01:22:09.469835 | orchestrator |       + directory_permission = "0777"
2026-02-05 01:22:09.469840 | orchestrator |       + file_permission      = "0644"
2026-02-05 01:22:09.469846 | orchestrator |       + filename             = ".MANAGER_ADDRESS.ci"
2026-02-05 01:22:09.469851 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.469857 | orchestrator |     }
2026-02-05 01:22:09.469951 | orchestrator |
2026-02-05 01:22:09.469967 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-05 01:22:09.469974 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-05 01:22:09.469979 | orchestrator |       + content              = (known after apply)
2026-02-05 01:22:09.469985 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 01:22:09.469990 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 01:22:09.469995 | orchestrator |       + content_md5          = (known after apply)
2026-02-05 01:22:09.470001 | orchestrator |       + content_sha1         = (known after apply)
2026-02-05 01:22:09.470006 | orchestrator |       + content_sha256       = (known after apply)
2026-02-05 01:22:09.470039 | orchestrator |       + content_sha512       = (known after apply)
2026-02-05 01:22:09.470047 | orchestrator |       + directory_permission = "0777"
2026-02-05 01:22:09.470053 | orchestrator |       + file_permission      = "0644"
2026-02-05 01:22:09.470066 | orchestrator |       + filename             = ".id_rsa.ci.pub"
2026-02-05 01:22:09.470072 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.470077 | orchestrator |     }
2026-02-05 01:22:09.470171 | orchestrator |
2026-02-05 01:22:09.470188 | orchestrator |   # local_file.inventory will be created
2026-02-05 01:22:09.470194 | orchestrator |   + resource "local_file" "inventory" {
2026-02-05 01:22:09.470200 | orchestrator |       + content              = (known after apply)
2026-02-05 01:22:09.470205 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 01:22:09.470211 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 01:22:09.470216 | orchestrator |       + content_md5          = (known after apply)
2026-02-05 01:22:09.470222 | orchestrator |       + content_sha1         = (known after apply)
2026-02-05 01:22:09.470228 | orchestrator |       + content_sha256       = (known after apply)
2026-02-05 01:22:09.470233 | orchestrator |       + content_sha512       = (known after apply)
2026-02-05 01:22:09.470239 | orchestrator |       + directory_permission = "0777"
2026-02-05 01:22:09.470244 | orchestrator |       + file_permission      = "0644"
2026-02-05 01:22:09.470250 | orchestrator |       + filename             = "inventory.ci"
2026-02-05 01:22:09.470255 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.470261 | orchestrator |     }
2026-02-05 01:22:09.470355 | orchestrator |
2026-02-05 01:22:09.470372 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-05 01:22:09.470379 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-05 01:22:09.470384 | orchestrator |       + content              = (sensitive value)
2026-02-05 01:22:09.470406 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 01:22:09.470413 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 01:22:09.470418 | orchestrator |       + content_md5          = (known after apply)
2026-02-05 01:22:09.470423 | orchestrator |       + content_sha1         = (known after apply)
2026-02-05 01:22:09.470429 | orchestrator |       + content_sha256       = (known after apply)
2026-02-05 01:22:09.470434 | orchestrator |       + content_sha512       = (known after apply)
2026-02-05 01:22:09.470440 | orchestrator |       + directory_permission = "0700"
2026-02-05 01:22:09.470445 | orchestrator |       + file_permission      = "0600"
2026-02-05 01:22:09.470451 | orchestrator |       + filename             = ".id_rsa.ci"
2026-02-05 01:22:09.470456 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.470462 | orchestrator |     }
2026-02-05 01:22:09.470494 | orchestrator |
2026-02-05 01:22:09.470511 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-05 01:22:09.470518 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-05 01:22:09.470523 | orchestrator |       + id = (known after apply)
2026-02-05 01:22:09.470529 | orchestrator |     }
2026-02-05 01:22:09.470619 | orchestrator |
2026-02-05 01:22:09.470636 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-05 01:22:09.470643 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-05 01:22:09.470649 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.470655 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.470660 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.470666 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.470671 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.470677 | orchestrator |       + name                 = "testbed-volume-manager-base"
2026-02-05 01:22:09.470682 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.470688 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.470693 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.470699 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.470704 | orchestrator |     }
2026-02-05 01:22:09.470793 | orchestrator |
2026-02-05 01:22:09.470813 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-05 01:22:09.470823 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 01:22:09.470832 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.470840 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.470848 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.470863 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.470871 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.470880 | orchestrator |       + name                 = "testbed-volume-0-node-base"
2026-02-05 01:22:09.470888 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.470897 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.470906 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.470914 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.470923 | orchestrator |     }
2026-02-05 01:22:09.471065 | orchestrator |
2026-02-05 01:22:09.471086 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-05 01:22:09.471092 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 01:22:09.471099 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.471104 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.471110 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.471115 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.471121 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.471126 | orchestrator |       + name                 = "testbed-volume-1-node-base"
2026-02-05 01:22:09.471132 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.471137 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.471143 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.471148 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.471154 | orchestrator |     }
2026-02-05 01:22:09.471285 | orchestrator |
2026-02-05 01:22:09.471305 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-05 01:22:09.471311 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 01:22:09.471317 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.471323 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.471328 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.471334 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.471339 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.471345 | orchestrator |       + name                 = "testbed-volume-2-node-base"
2026-02-05 01:22:09.471350 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.471356 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.471369 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.471374 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.471380 | orchestrator |     }
2026-02-05 01:22:09.471501 | orchestrator |
2026-02-05 01:22:09.471525 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-05 01:22:09.471534 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 01:22:09.471542 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.471552 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.471561 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.471569 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.471578 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.471587 | orchestrator |       + name                 = "testbed-volume-3-node-base"
2026-02-05 01:22:09.471596 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.471605 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.471614 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.471623 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.471632 | orchestrator |     }
2026-02-05 01:22:09.471763 | orchestrator |
2026-02-05 01:22:09.471784 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-05 01:22:09.471791 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 01:22:09.471797 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.471802 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.471808 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.471822 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.471828 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.471834 | orchestrator |       + name                 = "testbed-volume-4-node-base"
2026-02-05 01:22:09.471840 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.471845 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.471851 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.471856 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.471862 | orchestrator |     }
2026-02-05 01:22:09.471953 | orchestrator |
2026-02-05 01:22:09.471970 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-05 01:22:09.471977 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 01:22:09.471982 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.471988 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.471994 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.471999 | orchestrator |       + image_id             = (known after apply)
2026-02-05 01:22:09.472005 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.472010 | orchestrator |       + name                 = "testbed-volume-5-node-base"
2026-02-05 01:22:09.472016 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.472021 | orchestrator |       + size                 = 80
2026-02-05 01:22:09.472027 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.472033 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.472038 | orchestrator |     }
2026-02-05 01:22:09.472122 | orchestrator |
2026-02-05 01:22:09.472138 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-05 01:22:09.472145 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.472151 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.472157 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.472162 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.472168 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.472173 | orchestrator |       + name                 = "testbed-volume-0-node-3"
2026-02-05 01:22:09.472179 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.472184 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.472190 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.472196 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.472201 | orchestrator |     }
2026-02-05 01:22:09.472284 | orchestrator |
2026-02-05 01:22:09.472300 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-05 01:22:09.472306 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.472312 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.472317 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.472323 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.472328 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.472334 | orchestrator |       + name                 = "testbed-volume-1-node-4"
2026-02-05 01:22:09.472339 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.472345 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.472350 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.472356 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.472362 | orchestrator |     }
2026-02-05 01:22:09.472462 | orchestrator |
2026-02-05 01:22:09.472480 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-05 01:22:09.472487 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.472492 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.472498 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.472503 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.472509 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.472514 | orchestrator |       + name                 = "testbed-volume-2-node-5"
2026-02-05 01:22:09.472520 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.472531 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.472537 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.472542 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.472548 | orchestrator |     }
2026-02-05 01:22:09.472628 | orchestrator |
2026-02-05 01:22:09.472645 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-05 01:22:09.472651 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.472656 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.472662 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.472667 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.472679 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.472684 | orchestrator |       + name                 = "testbed-volume-3-node-3"
2026-02-05 01:22:09.472690 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.472696 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.472701 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.472707 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.472712 | orchestrator |     }
2026-02-05 01:22:09.472791 | orchestrator |
2026-02-05 01:22:09.472807 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-05 01:22:09.472814 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.472820 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.472825 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.472831 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.472836 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.472842 | orchestrator |       + name                 = "testbed-volume-4-node-4"
2026-02-05 01:22:09.472848 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.472853 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.472859 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.472864 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.472870 | orchestrator |     }
2026-02-05 01:22:09.472952 | orchestrator |
2026-02-05 01:22:09.472969 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-05 01:22:09.472975 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.472981 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.472986 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.472992 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.472997 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.473003 | orchestrator |       + name                 = "testbed-volume-5-node-5"
2026-02-05 01:22:09.473008 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.473013 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.473019 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.473024 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.473030 | orchestrator |     }
2026-02-05 01:22:09.473105 | orchestrator |
2026-02-05 01:22:09.473121 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-05 01:22:09.473127 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.473133 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.473138 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.473144 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.473149 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.473155 | orchestrator |       + name                 = "testbed-volume-6-node-3"
2026-02-05 01:22:09.473160 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.473166 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.473171 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.473177 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.473182 | orchestrator |     }
2026-02-05 01:22:09.473262 | orchestrator |
2026-02-05 01:22:09.473279 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-05 01:22:09.473285 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 01:22:09.473296 | orchestrator |       + attachment           = (known after apply)
2026-02-05 01:22:09.473301 | orchestrator |       + availability_zone    = "nova"
2026-02-05 01:22:09.473307 | orchestrator |       + id                   = (known after apply)
2026-02-05 01:22:09.473312 | orchestrator |       + metadata             = (known after apply)
2026-02-05 01:22:09.473318 | orchestrator |       + name                 = "testbed-volume-7-node-4"
2026-02-05 01:22:09.473323 | orchestrator |       + region               = (known after apply)
2026-02-05 01:22:09.473329 | orchestrator |       + size                 = 20
2026-02-05 01:22:09.473334 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 01:22:09.473339 | orchestrator |       + volume_type          = "ssd"
2026-02-05 01:22:09.473345 | orchestrator |     }
2026-02-05 01:22:09.473473 | orchestrator |
2026-02-05 01:22:09.473492 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-05 01:22:09.473498 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-05 01:22:09.473503 | orchestrator | + attachment = (known after apply) 2026-02-05 01:22:09.473509 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.473514 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.473520 | orchestrator | + metadata = (known after apply) 2026-02-05 01:22:09.473525 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-05 01:22:09.473531 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.473536 | orchestrator | + size = 20 2026-02-05 01:22:09.473542 | orchestrator | + volume_retype_policy = "never" 2026-02-05 01:22:09.473547 | orchestrator | + volume_type = "ssd" 2026-02-05 01:22:09.473553 | orchestrator | } 2026-02-05 01:22:09.473826 | orchestrator | 2026-02-05 01:22:09.473846 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-05 01:22:09.473853 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-05 01:22:09.473858 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.473864 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.473869 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.473875 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.473880 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.473886 | orchestrator | + config_drive = true 2026-02-05 01:22:09.473901 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.473907 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.473913 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-05 01:22:09.473918 | orchestrator | + force_delete = false 2026-02-05 01:22:09.473923 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.473929 | 
orchestrator | + id = (known after apply) 2026-02-05 01:22:09.473934 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.473940 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.473945 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.473951 | orchestrator | + name = "testbed-manager" 2026-02-05 01:22:09.473956 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.473961 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.473967 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.473972 | orchestrator | + stop_before_destroy = false 2026-02-05 01:22:09.473978 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.473983 | orchestrator | + user_data = (sensitive value) 2026-02-05 01:22:09.473988 | orchestrator | 2026-02-05 01:22:09.473994 | orchestrator | + block_device { 2026-02-05 01:22:09.474000 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.474005 | orchestrator | + delete_on_termination = false 2026-02-05 01:22:09.474011 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.474039 | orchestrator | + multiattach = false 2026-02-05 01:22:09.474044 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.474050 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.474061 | orchestrator | } 2026-02-05 01:22:09.474066 | orchestrator | 2026-02-05 01:22:09.474072 | orchestrator | + network { 2026-02-05 01:22:09.474078 | orchestrator | + access_network = false 2026-02-05 01:22:09.474083 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.474089 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.474094 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.474100 | orchestrator | + name = (known after apply) 2026-02-05 01:22:09.474105 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.474111 | orchestrator | + uuid = (known after apply) 2026-02-05 
01:22:09.474117 | orchestrator | } 2026-02-05 01:22:09.474122 | orchestrator | } 2026-02-05 01:22:09.474406 | orchestrator | 2026-02-05 01:22:09.474425 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-05 01:22:09.474431 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 01:22:09.474437 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.474442 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.474448 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.474453 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.474459 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.474464 | orchestrator | + config_drive = true 2026-02-05 01:22:09.474470 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.474475 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.474481 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 01:22:09.474486 | orchestrator | + force_delete = false 2026-02-05 01:22:09.474492 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.474497 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.474503 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.474508 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.474514 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.474520 | orchestrator | + name = "testbed-node-0" 2026-02-05 01:22:09.474525 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.474530 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.474536 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.474541 | orchestrator | + stop_before_destroy = false 2026-02-05 01:22:09.474547 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.474552 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 01:22:09.474558 | orchestrator | 2026-02-05 01:22:09.474563 | orchestrator | + block_device { 2026-02-05 01:22:09.474569 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.474575 | orchestrator | + delete_on_termination = false 2026-02-05 01:22:09.474580 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.474585 | orchestrator | + multiattach = false 2026-02-05 01:22:09.474591 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.474596 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.474602 | orchestrator | } 2026-02-05 01:22:09.474607 | orchestrator | 2026-02-05 01:22:09.474613 | orchestrator | + network { 2026-02-05 01:22:09.474618 | orchestrator | + access_network = false 2026-02-05 01:22:09.474624 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.474629 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.474634 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.474640 | orchestrator | + name = (known after apply) 2026-02-05 01:22:09.474645 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.474651 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.474656 | orchestrator | } 2026-02-05 01:22:09.474662 | orchestrator | } 2026-02-05 01:22:09.474905 | orchestrator | 2026-02-05 01:22:09.474921 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-05 01:22:09.474927 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 01:22:09.474933 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.474943 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.474949 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.474954 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.474960 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.474965 
| orchestrator | + config_drive = true 2026-02-05 01:22:09.474971 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.474976 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.474981 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 01:22:09.474987 | orchestrator | + force_delete = false 2026-02-05 01:22:09.474992 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.474998 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.475003 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.475009 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.475015 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.475020 | orchestrator | + name = "testbed-node-1" 2026-02-05 01:22:09.475025 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.475031 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.475037 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.475042 | orchestrator | + stop_before_destroy = false 2026-02-05 01:22:09.475048 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.475057 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 01:22:09.475063 | orchestrator | 2026-02-05 01:22:09.475068 | orchestrator | + block_device { 2026-02-05 01:22:09.475074 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.475079 | orchestrator | + delete_on_termination = false 2026-02-05 01:22:09.475085 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.475090 | orchestrator | + multiattach = false 2026-02-05 01:22:09.475096 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.475101 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.475107 | orchestrator | } 2026-02-05 01:22:09.475112 | orchestrator | 2026-02-05 01:22:09.475118 | orchestrator | + network { 2026-02-05 01:22:09.475123 | orchestrator | + access_network = 
false 2026-02-05 01:22:09.475129 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.475134 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.475139 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.475145 | orchestrator | + name = (known after apply) 2026-02-05 01:22:09.475150 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.475156 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.475161 | orchestrator | } 2026-02-05 01:22:09.475167 | orchestrator | } 2026-02-05 01:22:09.475463 | orchestrator | 2026-02-05 01:22:09.475482 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-05 01:22:09.475489 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 01:22:09.475494 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.475500 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.475507 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.475512 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.475518 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.475523 | orchestrator | + config_drive = true 2026-02-05 01:22:09.475529 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.475534 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.475540 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 01:22:09.475545 | orchestrator | + force_delete = false 2026-02-05 01:22:09.475551 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.475556 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.475561 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.475572 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.475578 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.475583 | orchestrator | + name = 
"testbed-node-2" 2026-02-05 01:22:09.475589 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.475594 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.475600 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.475605 | orchestrator | + stop_before_destroy = false 2026-02-05 01:22:09.475611 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.475616 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 01:22:09.475622 | orchestrator | 2026-02-05 01:22:09.475627 | orchestrator | + block_device { 2026-02-05 01:22:09.475633 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.475638 | orchestrator | + delete_on_termination = false 2026-02-05 01:22:09.475644 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.475649 | orchestrator | + multiattach = false 2026-02-05 01:22:09.475654 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.475660 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.475665 | orchestrator | } 2026-02-05 01:22:09.475671 | orchestrator | 2026-02-05 01:22:09.475676 | orchestrator | + network { 2026-02-05 01:22:09.475682 | orchestrator | + access_network = false 2026-02-05 01:22:09.475687 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.475692 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.475698 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.475703 | orchestrator | + name = (known after apply) 2026-02-05 01:22:09.475709 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.475714 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.475719 | orchestrator | } 2026-02-05 01:22:09.475725 | orchestrator | } 2026-02-05 01:22:09.476022 | orchestrator | 2026-02-05 01:22:09.476044 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-05 01:22:09.476050 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-05 01:22:09.476056 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.476061 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.476067 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.476072 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.476078 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.476083 | orchestrator | + config_drive = true 2026-02-05 01:22:09.476089 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.476094 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.476100 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 01:22:09.476105 | orchestrator | + force_delete = false 2026-02-05 01:22:09.476111 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.476116 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.476121 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.476127 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.476132 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.476138 | orchestrator | + name = "testbed-node-3" 2026-02-05 01:22:09.476143 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.476149 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.476154 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.476160 | orchestrator | + stop_before_destroy = false 2026-02-05 01:22:09.476165 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.476171 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 01:22:09.476176 | orchestrator | 2026-02-05 01:22:09.476182 | orchestrator | + block_device { 2026-02-05 01:22:09.476187 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.476193 | orchestrator | + delete_on_termination = false 2026-02-05 
01:22:09.476198 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.476208 | orchestrator | + multiattach = false 2026-02-05 01:22:09.476214 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.476219 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.476225 | orchestrator | } 2026-02-05 01:22:09.476230 | orchestrator | 2026-02-05 01:22:09.476236 | orchestrator | + network { 2026-02-05 01:22:09.476241 | orchestrator | + access_network = false 2026-02-05 01:22:09.476247 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.476252 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.476258 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.476263 | orchestrator | + name = (known after apply) 2026-02-05 01:22:09.476269 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.476274 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.476280 | orchestrator | } 2026-02-05 01:22:09.476285 | orchestrator | } 2026-02-05 01:22:09.476549 | orchestrator | 2026-02-05 01:22:09.476567 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-05 01:22:09.476574 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 01:22:09.476579 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.476585 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.476590 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.476596 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.476601 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.476606 | orchestrator | + config_drive = true 2026-02-05 01:22:09.476612 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.476617 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.476623 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 01:22:09.476628 | 
orchestrator | + force_delete = false 2026-02-05 01:22:09.476634 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.476639 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.476645 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.476650 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.476656 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.476661 | orchestrator | + name = "testbed-node-4" 2026-02-05 01:22:09.476667 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.476672 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.476678 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.476683 | orchestrator | + stop_before_destroy = false 2026-02-05 01:22:09.476688 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.476694 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 01:22:09.476700 | orchestrator | 2026-02-05 01:22:09.476705 | orchestrator | + block_device { 2026-02-05 01:22:09.476711 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.476716 | orchestrator | + delete_on_termination = false 2026-02-05 01:22:09.476722 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.476727 | orchestrator | + multiattach = false 2026-02-05 01:22:09.476733 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.476738 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.476744 | orchestrator | } 2026-02-05 01:22:09.476749 | orchestrator | 2026-02-05 01:22:09.476755 | orchestrator | + network { 2026-02-05 01:22:09.476760 | orchestrator | + access_network = false 2026-02-05 01:22:09.476766 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.476771 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.476776 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.476782 | orchestrator | + name = (known 
after apply) 2026-02-05 01:22:09.476787 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.476793 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.476798 | orchestrator | } 2026-02-05 01:22:09.476804 | orchestrator | } 2026-02-05 01:22:09.477048 | orchestrator | 2026-02-05 01:22:09.477065 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-05 01:22:09.477071 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 01:22:09.477077 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 01:22:09.477082 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 01:22:09.477088 | orchestrator | + all_metadata = (known after apply) 2026-02-05 01:22:09.477093 | orchestrator | + all_tags = (known after apply) 2026-02-05 01:22:09.477099 | orchestrator | + availability_zone = "nova" 2026-02-05 01:22:09.477104 | orchestrator | + config_drive = true 2026-02-05 01:22:09.477110 | orchestrator | + created = (known after apply) 2026-02-05 01:22:09.477115 | orchestrator | + flavor_id = (known after apply) 2026-02-05 01:22:09.477120 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 01:22:09.477126 | orchestrator | + force_delete = false 2026-02-05 01:22:09.477131 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 01:22:09.477137 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.477142 | orchestrator | + image_id = (known after apply) 2026-02-05 01:22:09.477147 | orchestrator | + image_name = (known after apply) 2026-02-05 01:22:09.477153 | orchestrator | + key_pair = "testbed" 2026-02-05 01:22:09.477158 | orchestrator | + name = "testbed-node-5" 2026-02-05 01:22:09.477164 | orchestrator | + power_state = "active" 2026-02-05 01:22:09.477169 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.477174 | orchestrator | + security_groups = (known after apply) 2026-02-05 01:22:09.477180 | orchestrator | + 
stop_before_destroy = false 2026-02-05 01:22:09.477185 | orchestrator | + updated = (known after apply) 2026-02-05 01:22:09.477191 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 01:22:09.477196 | orchestrator | 2026-02-05 01:22:09.477202 | orchestrator | + block_device { 2026-02-05 01:22:09.477207 | orchestrator | + boot_index = 0 2026-02-05 01:22:09.477213 | orchestrator | + delete_on_termination = false 2026-02-05 01:22:09.477218 | orchestrator | + destination_type = "volume" 2026-02-05 01:22:09.477223 | orchestrator | + multiattach = false 2026-02-05 01:22:09.477229 | orchestrator | + source_type = "volume" 2026-02-05 01:22:09.477234 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.477240 | orchestrator | } 2026-02-05 01:22:09.477245 | orchestrator | 2026-02-05 01:22:09.477251 | orchestrator | + network { 2026-02-05 01:22:09.477256 | orchestrator | + access_network = false 2026-02-05 01:22:09.477262 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 01:22:09.477267 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 01:22:09.477273 | orchestrator | + mac = (known after apply) 2026-02-05 01:22:09.477278 | orchestrator | + name = (known after apply) 2026-02-05 01:22:09.477284 | orchestrator | + port = (known after apply) 2026-02-05 01:22:09.477289 | orchestrator | + uuid = (known after apply) 2026-02-05 01:22:09.477295 | orchestrator | } 2026-02-05 01:22:09.477300 | orchestrator | } 2026-02-05 01:22:09.477359 | orchestrator | 2026-02-05 01:22:09.477376 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-05 01:22:09.477382 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-05 01:22:09.477388 | orchestrator | + fingerprint = (known after apply) 2026-02-05 01:22:09.477432 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.477438 | orchestrator | + name = "testbed" 2026-02-05 01:22:09.477443 | orchestrator | + private_key = 
(sensitive value) 2026-02-05 01:22:09.477449 | orchestrator | + public_key = (known after apply) 2026-02-05 01:22:09.477454 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.477460 | orchestrator | + user_id = (known after apply) 2026-02-05 01:22:09.477465 | orchestrator | } 2026-02-05 01:22:09.477538 | orchestrator | 2026-02-05 01:22:09.477559 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-05 01:22:09.477565 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-05 01:22:09.477576 | orchestrator | + device = (known after apply) 2026-02-05 01:22:09.477582 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.477587 | orchestrator | + instance_id = (known after apply) 2026-02-05 01:22:09.477593 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.477602 | orchestrator | + volume_id = (known after apply) 2026-02-05 01:22:09.477608 | orchestrator | } 2026-02-05 01:22:09.477667 | orchestrator | 2026-02-05 01:22:09.477683 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-05 01:22:09.477689 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-05 01:22:09.477694 | orchestrator | + device = (known after apply) 2026-02-05 01:22:09.477700 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.477705 | orchestrator | + instance_id = (known after apply) 2026-02-05 01:22:09.477711 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.477717 | orchestrator | + volume_id = (known after apply) 2026-02-05 01:22:09.477722 | orchestrator | } 2026-02-05 01:22:09.477772 | orchestrator | 2026-02-05 01:22:09.477788 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-05 01:22:09.477794 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-02-05 01:22:09.477800 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.477805 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.477810 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.477816 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.477821 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.477827 | orchestrator |   }
2026-02-05 01:22:09.477875 | orchestrator |
2026-02-05 01:22:09.477891 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-02-05 01:22:09.477898 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-05 01:22:09.477903 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.477908 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.477914 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.477919 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.477925 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.477930 | orchestrator |   }
2026-02-05 01:22:09.477977 | orchestrator |
2026-02-05 01:22:09.477993 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-02-05 01:22:09.478000 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-05 01:22:09.478005 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.478011 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.478036 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.478041 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.478047 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.478052 | orchestrator |   }
2026-02-05 01:22:09.478101 | orchestrator |
2026-02-05 01:22:09.478117 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-02-05 01:22:09.478124 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-05 01:22:09.478129 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.478135 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.478140 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.478146 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.478151 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.478157 | orchestrator |   }
2026-02-05 01:22:09.478208 | orchestrator |
2026-02-05 01:22:09.478224 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-02-05 01:22:09.478230 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-05 01:22:09.478236 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.478241 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.478247 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.478252 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.478262 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.478268 | orchestrator |   }
2026-02-05 01:22:09.478341 | orchestrator |
2026-02-05 01:22:09.478365 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-02-05 01:22:09.478374 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-05 01:22:09.478382 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.478407 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.478416 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.478425 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.478433 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.478441 | orchestrator |   }
2026-02-05 01:22:09.478517 | orchestrator |
2026-02-05 01:22:09.478544 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-02-05 01:22:09.478555 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-05 01:22:09.478564 | orchestrator |     + device      = (known after apply)
2026-02-05 01:22:09.478573 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.478582 | orchestrator |     + instance_id = (known after apply)
2026-02-05 01:22:09.478589 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.478595 | orchestrator |     + volume_id   = (known after apply)
2026-02-05 01:22:09.478600 | orchestrator |   }
2026-02-05 01:22:09.478655 | orchestrator |
2026-02-05 01:22:09.478672 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-02-05 01:22:09.478679 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-02-05 01:22:09.478684 | orchestrator |     + fixed_ip    = (known after apply)
2026-02-05 01:22:09.478690 | orchestrator |     + floating_ip = (known after apply)
2026-02-05 01:22:09.478696 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.478701 | orchestrator |     + port_id     = (known after apply)
2026-02-05 01:22:09.478707 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.478712 | orchestrator |   }
2026-02-05 01:22:09.478804 | orchestrator |
2026-02-05 01:22:09.478822 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-02-05 01:22:09.478828 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-02-05 01:22:09.478834 | orchestrator |     + address    = (known after apply)
2026-02-05 01:22:09.478840 | orchestrator |     + all_tags   = (known after apply)
2026-02-05 01:22:09.478852 | orchestrator |     + dns_domain = (known after apply)
2026-02-05 01:22:09.478858 | orchestrator |     + dns_name   = (known after apply)
2026-02-05 01:22:09.478863 | orchestrator |     + fixed_ip   = (known after apply)
2026-02-05 01:22:09.478869 | orchestrator |     + id         = (known after apply)
2026-02-05 01:22:09.478874 | orchestrator |     + pool       = "public"
2026-02-05 01:22:09.478881 | orchestrator |     + port_id    = (known after apply)
2026-02-05 01:22:09.478886 | orchestrator |     + region     = (known after apply)
2026-02-05 01:22:09.478892 | orchestrator |     + subnet_id  = (known after apply)
2026-02-05 01:22:09.478897 | orchestrator |     + tenant_id  = (known after apply)
2026-02-05 01:22:09.478903 | orchestrator |   }
2026-02-05 01:22:09.479123 | orchestrator |
2026-02-05 01:22:09.479144 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-02-05 01:22:09.479150 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-02-05 01:22:09.479156 | orchestrator |     + admin_state_up          = (known after apply)
2026-02-05 01:22:09.479161 | orchestrator |     + all_tags                = (known after apply)
2026-02-05 01:22:09.479167 | orchestrator |     + availability_zone_hints = [
2026-02-05 01:22:09.479173 | orchestrator |         + "nova",
2026-02-05 01:22:09.479178 | orchestrator |       ]
2026-02-05 01:22:09.479184 | orchestrator |     + dns_domain              = (known after apply)
2026-02-05 01:22:09.479189 | orchestrator |     + external                = (known after apply)
2026-02-05 01:22:09.479195 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.479200 | orchestrator |     + mtu                     = (known after apply)
2026-02-05 01:22:09.479206 | orchestrator |     + name                    = "net-testbed-management"
2026-02-05 01:22:09.479211 | orchestrator |     + port_security_enabled   = (known after apply)
2026-02-05 01:22:09.479225 | orchestrator |     + qos_policy_id           = (known after apply)
2026-02-05 01:22:09.479230 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.479236 | orchestrator |     + shared                  = (known after apply)
2026-02-05 01:22:09.479241 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.479247 | orchestrator |     + transparent_vlan        = (known after apply)
2026-02-05 01:22:09.479252 | orchestrator |
2026-02-05 01:22:09.479258 | orchestrator |     + segments (known after apply)
2026-02-05 01:22:09.479263 | orchestrator |   }
2026-02-05 01:22:09.479474 | orchestrator |
2026-02-05 01:22:09.479494 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-02-05 01:22:09.479501 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-02-05 01:22:09.479506 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.479512 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.479517 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.479522 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.479528 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.479533 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.479539 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.479544 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.479549 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.479555 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.479560 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.479565 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.479571 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.479576 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.479581 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.479586 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.479592 | orchestrator |
2026-02-05 01:22:09.479597 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.479603 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.479608 | orchestrator |       }
2026-02-05 01:22:09.479613 | orchestrator |
2026-02-05 01:22:09.479619 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.479624 | orchestrator |
2026-02-05 01:22:09.479630 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.479635 | orchestrator |         + ip_address = "192.168.16.5"
2026-02-05 01:22:09.479641 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.479646 | orchestrator |       }
2026-02-05 01:22:09.479652 | orchestrator |   }
2026-02-05 01:22:09.479850 | orchestrator |
2026-02-05 01:22:09.479867 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-02-05 01:22:09.479874 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-05 01:22:09.479879 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.479885 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.479890 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.479895 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.479901 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.479906 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.479912 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.479917 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.479923 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.479928 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.479933 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.479939 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.479944 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.479950 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.479961 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.479967 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.479972 | orchestrator |
2026-02-05 01:22:09.479978 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.479983 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-05 01:22:09.479988 | orchestrator |       }
2026-02-05 01:22:09.479994 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480000 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.480005 | orchestrator |       }
2026-02-05 01:22:09.480010 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480016 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-05 01:22:09.480021 | orchestrator |       }
2026-02-05 01:22:09.480027 | orchestrator |
2026-02-05 01:22:09.480032 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.480038 | orchestrator |
2026-02-05 01:22:09.480043 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.480048 | orchestrator |         + ip_address = "192.168.16.10"
2026-02-05 01:22:09.480054 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.480059 | orchestrator |       }
2026-02-05 01:22:09.480065 | orchestrator |   }
2026-02-05 01:22:09.480262 | orchestrator |
2026-02-05 01:22:09.480281 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-02-05 01:22:09.480287 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-05 01:22:09.480297 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.480303 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.480309 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.480314 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.480319 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.480325 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.480330 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.480336 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.480341 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.480347 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.480352 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.480358 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.480363 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.480368 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.480374 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.480379 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.480384 | orchestrator |
2026-02-05 01:22:09.480404 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480410 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-05 01:22:09.480415 | orchestrator |       }
2026-02-05 01:22:09.480421 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480426 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.480432 | orchestrator |       }
2026-02-05 01:22:09.480437 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480443 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-05 01:22:09.480448 | orchestrator |       }
2026-02-05 01:22:09.480453 | orchestrator |
2026-02-05 01:22:09.480459 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.480464 | orchestrator |
2026-02-05 01:22:09.480470 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.480475 | orchestrator |         + ip_address = "192.168.16.11"
2026-02-05 01:22:09.480481 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.480486 | orchestrator |       }
2026-02-05 01:22:09.480492 | orchestrator |   }
2026-02-05 01:22:09.480703 | orchestrator |
2026-02-05 01:22:09.480721 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-02-05 01:22:09.480727 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-05 01:22:09.480733 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.480739 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.480744 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.480750 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.480760 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.480766 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.480772 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.480777 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.480782 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.480788 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.480793 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.480799 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.480804 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.480810 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.480815 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.480821 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.480826 | orchestrator |
2026-02-05 01:22:09.480832 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480837 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-05 01:22:09.480843 | orchestrator |       }
2026-02-05 01:22:09.480849 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480854 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.480860 | orchestrator |       }
2026-02-05 01:22:09.480865 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.480871 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-05 01:22:09.480876 | orchestrator |       }
2026-02-05 01:22:09.480882 | orchestrator |
2026-02-05 01:22:09.480887 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.480893 | orchestrator |
2026-02-05 01:22:09.480899 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.480904 | orchestrator |         + ip_address = "192.168.16.12"
2026-02-05 01:22:09.480910 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.480916 | orchestrator |       }
2026-02-05 01:22:09.480921 | orchestrator |   }
2026-02-05 01:22:09.481118 | orchestrator |
2026-02-05 01:22:09.481135 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-02-05 01:22:09.481141 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-05 01:22:09.481147 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.481152 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.481158 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.481163 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.481169 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.481174 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.481180 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.481185 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.481190 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.481196 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.481201 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.481207 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.481212 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.481218 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.481223 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.481228 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.481234 | orchestrator |
2026-02-05 01:22:09.481239 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.481245 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-05 01:22:09.481250 | orchestrator |       }
2026-02-05 01:22:09.481256 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.481261 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.481266 | orchestrator |       }
2026-02-05 01:22:09.481272 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.481277 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-05 01:22:09.481283 | orchestrator |       }
2026-02-05 01:22:09.481288 | orchestrator |
2026-02-05 01:22:09.481301 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.481306 | orchestrator |
2026-02-05 01:22:09.481312 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.481317 | orchestrator |         + ip_address = "192.168.16.13"
2026-02-05 01:22:09.481323 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.481328 | orchestrator |       }
2026-02-05 01:22:09.481333 | orchestrator |   }
2026-02-05 01:22:09.481610 | orchestrator |
2026-02-05 01:22:09.481632 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-02-05 01:22:09.481638 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-05 01:22:09.481644 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.481649 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.481655 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.481660 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.481666 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.481671 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.481677 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.481682 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.481692 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.481698 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.481703 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.481709 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.481714 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.481720 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.481725 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.481731 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.481737 | orchestrator |
2026-02-05 01:22:09.481743 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.481754 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-05 01:22:09.481760 | orchestrator |       }
2026-02-05 01:22:09.481766 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.481771 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.481777 | orchestrator |       }
2026-02-05 01:22:09.481782 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.481788 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-05 01:22:09.481793 | orchestrator |       }
2026-02-05 01:22:09.481798 | orchestrator |
2026-02-05 01:22:09.481804 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.481809 | orchestrator |
2026-02-05 01:22:09.481815 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.481820 | orchestrator |         + ip_address = "192.168.16.14"
2026-02-05 01:22:09.481826 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.481831 | orchestrator |       }
2026-02-05 01:22:09.481837 | orchestrator |   }
2026-02-05 01:22:09.482000 | orchestrator |
2026-02-05 01:22:09.482034 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-02-05 01:22:09.482041 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-05 01:22:09.482046 | orchestrator |     + admin_state_up         = (known after apply)
2026-02-05 01:22:09.482051 | orchestrator |     + all_fixed_ips          = (known after apply)
2026-02-05 01:22:09.482056 | orchestrator |     + all_security_group_ids = (known after apply)
2026-02-05 01:22:09.482061 | orchestrator |     + all_tags               = (known after apply)
2026-02-05 01:22:09.482066 | orchestrator |     + device_id              = (known after apply)
2026-02-05 01:22:09.482071 | orchestrator |     + device_owner           = (known after apply)
2026-02-05 01:22:09.482076 | orchestrator |     + dns_assignment         = (known after apply)
2026-02-05 01:22:09.482081 | orchestrator |     + dns_name               = (known after apply)
2026-02-05 01:22:09.482086 | orchestrator |     + id                     = (known after apply)
2026-02-05 01:22:09.482091 | orchestrator |     + mac_address            = (known after apply)
2026-02-05 01:22:09.482096 | orchestrator |     + network_id             = (known after apply)
2026-02-05 01:22:09.482100 | orchestrator |     + port_security_enabled  = (known after apply)
2026-02-05 01:22:09.482105 | orchestrator |     + qos_policy_id          = (known after apply)
2026-02-05 01:22:09.482116 | orchestrator |     + region                 = (known after apply)
2026-02-05 01:22:09.482121 | orchestrator |     + security_group_ids     = (known after apply)
2026-02-05 01:22:09.482125 | orchestrator |     + tenant_id              = (known after apply)
2026-02-05 01:22:09.482130 | orchestrator |
2026-02-05 01:22:09.482135 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.482140 | orchestrator |         + ip_address = "192.168.16.254/32"
2026-02-05 01:22:09.482145 | orchestrator |       }
2026-02-05 01:22:09.482150 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.482155 | orchestrator |         + ip_address = "192.168.16.8/32"
2026-02-05 01:22:09.482159 | orchestrator |       }
2026-02-05 01:22:09.482164 | orchestrator |     + allowed_address_pairs {
2026-02-05 01:22:09.482169 | orchestrator |         + ip_address = "192.168.16.9/32"
2026-02-05 01:22:09.482174 | orchestrator |       }
2026-02-05 01:22:09.482179 | orchestrator |
2026-02-05 01:22:09.482184 | orchestrator |     + binding (known after apply)
2026-02-05 01:22:09.482189 | orchestrator |
2026-02-05 01:22:09.482194 | orchestrator |     + fixed_ip {
2026-02-05 01:22:09.482199 | orchestrator |         + ip_address = "192.168.16.15"
2026-02-05 01:22:09.482204 | orchestrator |         + subnet_id  = (known after apply)
2026-02-05 01:22:09.482209 | orchestrator |       }
2026-02-05 01:22:09.482213 | orchestrator |   }
2026-02-05 01:22:09.482273 | orchestrator |
2026-02-05 01:22:09.482287 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-02-05 01:22:09.482292 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-02-05 01:22:09.482297 | orchestrator |     + force_destroy = false
2026-02-05 01:22:09.482302 | orchestrator |     + id            = (known after apply)
2026-02-05 01:22:09.482307 | orchestrator |     + port_id       = (known after apply)
2026-02-05 01:22:09.482312 | orchestrator |     + region        = (known after apply)
2026-02-05 01:22:09.482317 | orchestrator |     + router_id     = (known after apply)
2026-02-05 01:22:09.482322 | orchestrator |     + subnet_id     = (known after apply)
2026-02-05 01:22:09.482327 | orchestrator |   }
2026-02-05 01:22:09.482445 | orchestrator |
2026-02-05 01:22:09.482461 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-02-05 01:22:09.482467 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-02-05 01:22:09.482472 | orchestrator |     + admin_state_up          = (known after apply)
2026-02-05 01:22:09.482477 | orchestrator |     + all_tags                = (known after apply)
2026-02-05 01:22:09.482482 | orchestrator |     + availability_zone_hints = [
2026-02-05 01:22:09.482486 | orchestrator |         + "nova",
2026-02-05 01:22:09.482491 | orchestrator |       ]
2026-02-05 01:22:09.482496 | orchestrator |     + distributed             = (known after apply)
2026-02-05 01:22:09.482501 | orchestrator |     + enable_snat             = (known after apply)
2026-02-05 01:22:09.482506 | orchestrator |     + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-02-05 01:22:09.482511 | orchestrator |     + external_qos_policy_id  = (known after apply)
2026-02-05 01:22:09.482516 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.482521 | orchestrator |     + name                    = "testbed"
2026-02-05 01:22:09.482526 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.482530 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.482535 | orchestrator |
2026-02-05 01:22:09.482540 | orchestrator |     + external_fixed_ip (known after apply)
2026-02-05 01:22:09.482545 | orchestrator |   }
2026-02-05 01:22:09.482646 | orchestrator |
2026-02-05 01:22:09.482660 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-02-05 01:22:09.482667 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-02-05 01:22:09.482672 | orchestrator |     + description             = "ssh"
2026-02-05 01:22:09.482676 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.482681 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.482686 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.482691 | orchestrator |     + port_range_max          = 22
2026-02-05 01:22:09.482696 | orchestrator |     + port_range_min          = 22
2026-02-05 01:22:09.482701 | orchestrator |     + protocol                = "tcp"
2026-02-05 01:22:09.482706 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.482716 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.482721 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.482725 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.482730 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.482735 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.482740 | orchestrator |   }
2026-02-05 01:22:09.482838 | orchestrator |
2026-02-05 01:22:09.482853 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-02-05 01:22:09.482859 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-02-05 01:22:09.482864 | orchestrator |     + description             = "wireguard"
2026-02-05 01:22:09.482868 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.482873 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.482878 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.482883 | orchestrator |     + port_range_max          = 51820
2026-02-05 01:22:09.482888 | orchestrator |     + port_range_min          = 51820
2026-02-05 01:22:09.482893 | orchestrator |     + protocol                = "udp"
2026-02-05 01:22:09.482898 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.482903 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.482907 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.482912 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.482917 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.482922 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.482927 | orchestrator |   }
2026-02-05 01:22:09.483005 | orchestrator |
2026-02-05 01:22:09.483019 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-02-05 01:22:09.483025 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-02-05 01:22:09.483035 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483040 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483044 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483049 | orchestrator |     + protocol                = "tcp"
2026-02-05 01:22:09.483054 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.483059 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.483064 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.483068 | orchestrator |     + remote_ip_prefix        = "192.168.16.0/20"
2026-02-05 01:22:09.483073 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.483078 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.483083 | orchestrator |   }
2026-02-05 01:22:09.483159 | orchestrator |
2026-02-05 01:22:09.483173 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-02-05 01:22:09.483179 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-02-05 01:22:09.483183 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483188 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483193 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483198 | orchestrator |     + protocol                = "udp"
2026-02-05 01:22:09.483203 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.483208 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.483212 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.483217 | orchestrator |     + remote_ip_prefix        = "192.168.16.0/20"
2026-02-05 01:22:09.483222 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.483227 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.483232 | orchestrator |   }
2026-02-05 01:22:09.483309 | orchestrator |
2026-02-05 01:22:09.483324 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-02-05 01:22:09.483334 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-02-05 01:22:09.483339 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483344 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483349 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483354 | orchestrator |     + protocol                = "icmp"
2026-02-05 01:22:09.483358 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.483363 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.483368 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.483373 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.483378 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.483383 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.483388 | orchestrator |   }
2026-02-05 01:22:09.483485 | orchestrator |
2026-02-05 01:22:09.483499 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-02-05 01:22:09.483504 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-02-05 01:22:09.483509 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483514 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483519 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483524 | orchestrator |     + protocol                = "tcp"
2026-02-05 01:22:09.483529 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.483534 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.483539 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.483544 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.483549 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.483554 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.483558 | orchestrator |   }
2026-02-05 01:22:09.483638 | orchestrator |
2026-02-05 01:22:09.483653 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-02-05 01:22:09.483659 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-02-05 01:22:09.483664 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483668 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483673 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483678 | orchestrator |     + protocol                = "udp"
2026-02-05 01:22:09.483683 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.483688 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.483693 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.483698 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.483702 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.483707 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.483712 | orchestrator |   }
2026-02-05 01:22:09.483789 | orchestrator |
2026-02-05 01:22:09.483803 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-02-05 01:22:09.483809 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-02-05 01:22:09.483814 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483819 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483823 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483828 | orchestrator |     + protocol                = "icmp"
2026-02-05 01:22:09.483833 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.483838 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.483843 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.483848 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.483853 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.483858 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.483867 | orchestrator |   }
2026-02-05 01:22:09.483951 | orchestrator |
2026-02-05 01:22:09.483967 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-02-05 01:22:09.483973 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-02-05 01:22:09.483978 | orchestrator |     + description             = "vrrp"
2026-02-05 01:22:09.483983 | orchestrator |     + direction               = "ingress"
2026-02-05 01:22:09.483988 | orchestrator |     + ethertype               = "IPv4"
2026-02-05 01:22:09.483993 | orchestrator |     + id                      = (known after apply)
2026-02-05 01:22:09.483998 | orchestrator |     + protocol                = "112"
2026-02-05 01:22:09.484003 | orchestrator |     + region                  = (known after apply)
2026-02-05 01:22:09.484007 | orchestrator |     + remote_address_group_id = (known after apply)
2026-02-05 01:22:09.484012 | orchestrator |     + remote_group_id         = (known after apply)
2026-02-05 01:22:09.484017 | orchestrator |     + remote_ip_prefix        = "0.0.0.0/0"
2026-02-05 01:22:09.484022 | orchestrator |     + security_group_id       = (known after apply)
2026-02-05 01:22:09.484027 | orchestrator |     + tenant_id               = (known after apply)
2026-02-05 01:22:09.484032 | orchestrator |   }
2026-02-05 01:22:09.484093 | orchestrator |
2026-02-05 01:22:09.484107 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-02-05 01:22:09.484113 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-02-05 01:22:09.484118 | orchestrator |     + all_tags    = (known after apply)
2026-02-05 01:22:09.484123 | orchestrator |     + description = "management security group"
2026-02-05 01:22:09.484128 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.484133 | orchestrator |     + name        = "testbed-management"
2026-02-05 01:22:09.484138 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.484143 | orchestrator |     + stateful    = (known after apply)
2026-02-05 01:22:09.484147 | orchestrator |     + tenant_id   = (known after apply)
2026-02-05 01:22:09.484152 | orchestrator |   }
2026-02-05 01:22:09.484209 | orchestrator |
2026-02-05 01:22:09.484224 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-02-05 01:22:09.484229 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-02-05 01:22:09.484234 | orchestrator |     + all_tags    = (known after apply)
2026-02-05 01:22:09.484239 | orchestrator |     + description = "node security group"
2026-02-05 01:22:09.484244 | orchestrator |     + id          = (known after apply)
2026-02-05 01:22:09.484249 | orchestrator |     + name        = "testbed-node"
2026-02-05 01:22:09.484254 | orchestrator |     + region      = (known after apply)
2026-02-05 01:22:09.484258 | orchestrator |     + stateful    = (known after apply)
2026-02-05 01:22:09.484263 | orchestrator |     + tenant_id   = (known after apply)
2026-02-05 01:22:09.484268 | orchestrator |   }
2026-02-05 01:22:09.484435 | orchestrator |
2026-02-05 01:22:09.484452 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-02-05 01:22:09.484458 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-02-05 01:22:09.484463 | orchestrator |     + all_tags          = (known after apply)
2026-02-05 01:22:09.484468 | orchestrator |     + cidr              = "192.168.16.0/20"
2026-02-05 01:22:09.484473 | orchestrator |     + dns_nameservers   = [
2026-02-05 01:22:09.484478 | orchestrator |         + "8.8.8.8",
2026-02-05 01:22:09.484483 | orchestrator |         + "9.9.9.9",
2026-02-05 01:22:09.484488 | orchestrator |       ]
2026-02-05 01:22:09.484493 | orchestrator |     + enable_dhcp       = true
2026-02-05 01:22:09.484498 | orchestrator |     + gateway_ip        = (known after apply)
2026-02-05 01:22:09.484508 | orchestrator |     + id                = (known after apply)
2026-02-05 01:22:09.484513 | orchestrator |     + ip_version        = 4
2026-02-05 01:22:09.484518 | orchestrator |     + ipv6_address_mode = (known after apply)
2026-02-05 01:22:09.484523 | orchestrator |     + ipv6_ra_mode      = (known after apply)
2026-02-05 01:22:09.484528 | orchestrator |     + name              = "subnet-testbed-management"
2026-02-05 01:22:09.484532 | orchestrator | + network_id = (known after apply) 2026-02-05 01:22:09.484537 | orchestrator | + no_gateway = false 2026-02-05 01:22:09.484542 | orchestrator | + region = (known after apply) 2026-02-05 01:22:09.484547 | orchestrator | + service_types = (known after apply) 2026-02-05 01:22:09.484558 | orchestrator | + tenant_id = (known after apply) 2026-02-05 01:22:09.484562 | orchestrator | 2026-02-05 01:22:09.484567 | orchestrator | + allocation_pool { 2026-02-05 01:22:09.484572 | orchestrator | + end = "192.168.31.250" 2026-02-05 01:22:09.484577 | orchestrator | + start = "192.168.31.200" 2026-02-05 01:22:09.484582 | orchestrator | } 2026-02-05 01:22:09.484587 | orchestrator | } 2026-02-05 01:22:09.484625 | orchestrator | 2026-02-05 01:22:09.484640 | orchestrator | # terraform_data.image will be created 2026-02-05 01:22:09.484645 | orchestrator | + resource "terraform_data" "image" { 2026-02-05 01:22:09.484650 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.484655 | orchestrator | + input = "Ubuntu 24.04" 2026-02-05 01:22:09.484660 | orchestrator | + output = (known after apply) 2026-02-05 01:22:09.484665 | orchestrator | } 2026-02-05 01:22:09.484704 | orchestrator | 2026-02-05 01:22:09.484718 | orchestrator | # terraform_data.image_node will be created 2026-02-05 01:22:09.484723 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-05 01:22:09.484728 | orchestrator | + id = (known after apply) 2026-02-05 01:22:09.484733 | orchestrator | + input = "Ubuntu 24.04" 2026-02-05 01:22:09.484738 | orchestrator | + output = (known after apply) 2026-02-05 01:22:09.484743 | orchestrator | } 2026-02-05 01:22:09.484760 | orchestrator | 2026-02-05 01:22:09.484766 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
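For reference, the plan entries above imply Terraform resource definitions along these lines. A minimal sketch of the VRRP rule, with attribute values taken from the plan output; the reference expression used for `security_group_id` is an assumption, since the plan only shows it as `(known after apply)`:

```hcl
# Sketch reconstructed from the plan output above. The reference to
# openstack_networking_secgroup_v2.security_group_management is an assumption;
# the plan only shows security_group_id as (known after apply).
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The other `security_group_node_rule*` resources in the plan follow the same shape, differing only in `protocol` (tcp, udp, icmp) and their parent security group.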
2026-02-05 01:22:09.484779 | orchestrator | 2026-02-05 01:22:09.484784 | orchestrator | Changes to Outputs: 2026-02-05 01:22:09.484797 | orchestrator | + manager_address = (sensitive value) 2026-02-05 01:22:09.484802 | orchestrator | + private_key = (sensitive value) 2026-02-05 01:22:09.740863 | orchestrator | terraform_data.image: Creating... 2026-02-05 01:22:09.741160 | orchestrator | terraform_data.image: Creation complete after 0s [id=dff82f45-3763-447c-3caa-2af86e2f6047] 2026-02-05 01:22:09.741991 | orchestrator | terraform_data.image_node: Creating... 2026-02-05 01:22:09.742817 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=a8af3ea2-7bf7-0403-e26b-294c6d91552c] 2026-02-05 01:22:09.774617 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-05 01:22:09.774708 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-05 01:22:09.784338 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-05 01:22:09.784602 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-02-05 01:22:09.784906 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-05 01:22:09.785029 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-02-05 01:22:09.785927 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-05 01:22:09.788668 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-05 01:22:09.790790 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-05 01:22:09.800607 | orchestrator | openstack_networking_network_v2.net_management: Creating... 
2026-02-05 01:22:10.262834 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-05 01:22:10.268702 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-05 01:22:10.274482 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-05 01:22:10.275665 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-05 01:22:10.341116 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-02-05 01:22:10.345069 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-05 01:22:10.764545 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=3bdf5713-e7da-454d-8165-70cd5988d654] 2026-02-05 01:22:10.773719 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-02-05 01:22:13.396502 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=9d4195ed-cd70-4bda-970e-203e54c5de2a] 2026-02-05 01:22:13.401649 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-02-05 01:22:13.407358 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=67112651-7f80-4cd8-91f1-cb61626610a2] 2026-02-05 01:22:13.422051 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-02-05 01:22:13.428092 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=e3013df6-5c5e-4503-84f9-a700edabdb49] 2026-02-05 01:22:13.435213 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2026-02-05 01:22:13.442828 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=fbfcf598-94c5-41e4-b7a9-e869a71c977b] 2026-02-05 01:22:13.451514 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=1b9ba281-c2e6-4817-9dab-91e9708a21dc] 2026-02-05 01:22:13.454078 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-02-05 01:22:13.466735 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-02-05 01:22:13.467324 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=64f88b59-145a-4204-a5cc-35bb4626474a] 2026-02-05 01:22:13.472883 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-02-05 01:22:13.492238 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=41a73991-c162-41f3-bbc6-bb80a44790ff] 2026-02-05 01:22:13.509170 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=93de9619-194c-45d0-9020-848f0c7631a9] 2026-02-05 01:22:13.509666 | orchestrator | local_file.id_rsa_pub: Creating... 2026-02-05 01:22:13.518478 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=3cde925a1a852fb78ee4cf3e7443449b985824e1] 2026-02-05 01:22:13.523519 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-02-05 01:22:13.527102 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-02-05 01:22:13.534446 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=88cb7f4cfe9087690ec2d1c2460a747a0e9ed430] 2026-02-05 01:22:13.554257 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=46213c6d-7232-49e5-8bd8-8f24dba1e930] 2026-02-05 01:22:14.107137 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=7aa79787-b159-4a57-a4f1-e1205678d581] 2026-02-05 01:22:14.740503 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=4c32a214-8923-489a-8f2e-8cb26065f9fd] 2026-02-05 01:22:14.747617 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-02-05 01:22:16.765500 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde] 2026-02-05 01:22:16.787191 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=62c048b1-5f64-433a-b6c9-e2210ab077fa] 2026-02-05 01:22:16.811847 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=b5fa98ac-44dd-4c0e-a983-67c120325b97] 2026-02-05 01:22:16.850717 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=c6fc2347-eb32-4949-8ca3-7fc5e42443e4] 2026-02-05 01:22:16.863157 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=48b9971a-a594-48d0-a5ef-0421396a811f] 2026-02-05 01:22:16.918568 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=91e0d2c4-9998-4651-b894-475b8cd3188f] 2026-02-05 01:22:17.255257 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=fb8f2c3e-7be0-4ce0-a127-b3e4a856001a] 2026-02-05 01:22:17.261695 | orchestrator | openstack_networking_secgroup_v2.security_group_management: 
Creating... 2026-02-05 01:22:17.262884 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-02-05 01:22:17.263969 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-02-05 01:22:17.441196 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f29dd6f9-f30a-4974-9051-d3902b51d366] 2026-02-05 01:22:17.448560 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-02-05 01:22:17.448812 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-02-05 01:22:17.448881 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-02-05 01:22:17.451688 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-02-05 01:22:17.451962 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-02-05 01:22:17.459125 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-02-05 01:22:17.465420 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=85226c95-1337-479d-93f2-3419dfe1c322] 2026-02-05 01:22:17.470611 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-02-05 01:22:17.472367 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-02-05 01:22:17.474089 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-02-05 01:22:17.619834 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=de3bc51f-b399-4620-9c18-9b451a79517e] 2026-02-05 01:22:17.632536 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 
2026-02-05 01:22:17.674703 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=ba7ef48b-94ad-4e93-99ab-c7cd7fef5def] 2026-02-05 01:22:17.690794 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-02-05 01:22:17.804861 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=7957b026-f898-4894-b58d-5f0a2821ed1e] 2026-02-05 01:22:17.814860 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-02-05 01:22:17.878374 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=d9877a8b-c206-4c23-92ea-2037e1d6a5b4] 2026-02-05 01:22:17.894064 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-02-05 01:22:17.978542 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=49ac0339-b37b-4c14-9d24-475dd04a5500] 2026-02-05 01:22:17.989523 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=4f003a42-b486-4b4a-9318-5c7d37920477] 2026-02-05 01:22:17.990996 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-02-05 01:22:17.994445 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-02-05 01:22:18.090436 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=eb64315e-3b79-4252-802a-64d709186e93] 2026-02-05 01:22:18.098710 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 
2026-02-05 01:22:18.253866 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=738ef432-e48c-4290-bc11-9fece27dc5d5] 2026-02-05 01:22:18.309525 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=1b183eb4-f3a2-49d9-af81-0fa5cf65a9a4] 2026-02-05 01:22:18.334508 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=68ab721a-33fe-4958-8d81-e309c1f915ec] 2026-02-05 01:22:18.452225 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=ae1da154-41f0-4ce9-9ce0-c9f0405f44ba] 2026-02-05 01:22:18.520001 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=5c5a8ea9-6229-4400-a5d1-2c55c34c99d8] 2026-02-05 01:22:18.554270 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=9da77e70-5b92-4e16-9c58-d97bdd90de91] 2026-02-05 01:22:18.663503 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=4267e4ce-08dc-4cb4-a88f-fd6ad0dad61e] 2026-02-05 01:22:19.027518 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=10d07b07-cb6e-4b58-8ce7-98dd3a5b3b09] 2026-02-05 01:22:19.044649 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=f1207ef6-8ec6-4e1e-ad96-28f7b2ad55e0] 2026-02-05 01:22:19.875973 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=70a1c9c5-89d5-4225-85b7-2e5235fd28ee] 2026-02-05 01:22:19.892580 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-02-05 01:22:19.919541 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 
2026-02-05 01:22:19.919612 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-02-05 01:22:19.922653 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-02-05 01:22:19.925858 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-02-05 01:22:19.933449 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-02-05 01:22:19.940125 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-02-05 01:22:21.371951 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=2c23fe1b-d37d-4f38-baab-04c1ff8b4d2f] 2026-02-05 01:22:21.381084 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-02-05 01:22:21.387346 | orchestrator | local_file.inventory: Creating... 2026-02-05 01:22:21.389537 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-02-05 01:22:21.392421 | orchestrator | local_file.inventory: Creation complete after 0s [id=7fe1032e4af1faa47e878ef64632d2a3308a0806] 2026-02-05 01:22:21.396352 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8ddf6e54198db723747b7d014d07be84a0de935e] 2026-02-05 01:22:22.360816 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=2c23fe1b-d37d-4f38-baab-04c1ff8b4d2f] 2026-02-05 01:22:29.920088 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-02-05 01:22:29.922337 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-02-05 01:22:29.925670 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-02-05 01:22:29.932953 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-02-05 01:22:29.934153 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-02-05 01:22:29.941431 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-02-05 01:22:39.920334 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-02-05 01:22:39.922465 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-02-05 01:22:39.926847 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-02-05 01:22:39.933129 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-02-05 01:22:39.934302 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-02-05 01:22:39.941594 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-02-05 01:22:40.274665 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=f991328d-a0a8-4041-bd54-0288dcf6dd9a] 2026-02-05 01:22:40.384105 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=e5f65ab2-f3cc-44af-8612-43356467a34f] 2026-02-05 01:22:40.411051 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=2c8a371b-a048-45e6-9d94-b88996d94953] 2026-02-05 01:22:40.468147 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=34a62c96-8e8f-45d8-a08c-8fa8ab0ff452] 2026-02-05 01:22:49.922913 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-02-05 01:22:49.935258 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[30s elapsed] 2026-02-05 01:22:50.601189 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=ae503124-737c-49c7-bb2c-652f9d347a4c] 2026-02-05 01:22:51.082670 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=ee7e2647-0f99-41e6-8093-f05ced016fce] 2026-02-05 01:22:51.216092 | orchestrator | null_resource.node_semaphore: Creating... 2026-02-05 01:22:51.216163 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-02-05 01:22:51.216195 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-02-05 01:22:51.216202 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-02-05 01:22:51.216210 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-02-05 01:22:51.216217 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-02-05 01:22:51.216223 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-02-05 01:22:51.216231 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-02-05 01:22:51.216238 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-02-05 01:22:51.216245 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-02-05 01:22:51.216252 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5416458222428505086] 2026-02-05 01:22:51.241090 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-02-05 01:22:54.480284 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=ee7e2647-0f99-41e6-8093-f05ced016fce/64f88b59-145a-4204-a5cc-35bb4626474a] 2026-02-05 01:22:54.508609 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=f991328d-a0a8-4041-bd54-0288dcf6dd9a/1b9ba281-c2e6-4817-9dab-91e9708a21dc] 2026-02-05 01:22:54.520936 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=2c8a371b-a048-45e6-9d94-b88996d94953/41a73991-c162-41f3-bbc6-bb80a44790ff] 2026-02-05 01:22:54.535006 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=ee7e2647-0f99-41e6-8093-f05ced016fce/46213c6d-7232-49e5-8bd8-8f24dba1e930] 2026-02-05 01:22:54.593712 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=2c8a371b-a048-45e6-9d94-b88996d94953/fbfcf598-94c5-41e4-b7a9-e869a71c977b] 2026-02-05 01:22:54.615129 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=f991328d-a0a8-4041-bd54-0288dcf6dd9a/e3013df6-5c5e-4503-84f9-a700edabdb49] 2026-02-05 01:23:00.692150 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=ee7e2647-0f99-41e6-8093-f05ced016fce/9d4195ed-cd70-4bda-970e-203e54c5de2a] 2026-02-05 01:23:00.722647 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=2c8a371b-a048-45e6-9d94-b88996d94953/67112651-7f80-4cd8-91f1-cb61626610a2] 2026-02-05 01:23:00.736996 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=f991328d-a0a8-4041-bd54-0288dcf6dd9a/93de9619-194c-45d0-9020-848f0c7631a9] 2026-02-05 01:23:01.242405 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-02-05 01:23:11.242678 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-02-05 01:23:12.569162 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 22s [id=f0327e7b-38a3-4834-aff1-1523d5d85055] 2026-02-05 01:23:12.583478 | orchestrator | 2026-02-05 01:23:12.583557 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-02-05 01:23:12.583567 | orchestrator | 2026-02-05 01:23:12.583574 | orchestrator | Outputs: 2026-02-05 01:23:12.583582 | orchestrator | 2026-02-05 01:23:12.583588 | orchestrator | manager_address = 2026-02-05 01:23:12.583595 | orchestrator | private_key = 2026-02-05 01:23:13.091420 | orchestrator | ok: Runtime: 0:01:08.620837 2026-02-05 01:23:13.128183 | 2026-02-05 01:23:13.128350 | TASK [Fetch manager address] 2026-02-05 01:23:13.583096 | orchestrator | ok 2026-02-05 01:23:13.593823 | 2026-02-05 01:23:13.593946 | TASK [Set manager_host address] 2026-02-05 01:23:13.664138 | orchestrator | ok 2026-02-05 01:23:13.674060 | 2026-02-05 01:23:13.674184 | LOOP [Update ansible collections] 2026-02-05 01:23:14.802418 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 01:23:14.802917 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-05 01:23:14.802980 | orchestrator | Starting galaxy collection install process 2026-02-05 01:23:14.803016 | orchestrator | Process install dependency map 2026-02-05 01:23:14.803050 | orchestrator | Starting collection install process 2026-02-05 01:23:14.803080 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-02-05 01:23:14.803114 | orchestrator | Created collection for osism.commons:999.0.0 at 
/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-02-05 01:23:14.803150 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-02-05 01:23:14.803227 | orchestrator | ok: Item: commons Runtime: 0:00:00.783867 2026-02-05 01:23:15.768628 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 01:23:15.768931 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-05 01:23:15.768988 | orchestrator | Starting galaxy collection install process 2026-02-05 01:23:15.769030 | orchestrator | Process install dependency map 2026-02-05 01:23:15.769088 | orchestrator | Starting collection install process 2026-02-05 01:23:15.769123 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-02-05 01:23:15.769157 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-02-05 01:23:15.769286 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-05 01:23:15.769344 | orchestrator | ok: Item: services Runtime: 0:00:00.662831 2026-02-05 01:23:15.789540 | 2026-02-05 01:23:15.789783 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-05 01:23:26.364198 | orchestrator | ok 2026-02-05 01:23:26.374364 | 2026-02-05 01:23:26.374486 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-05 01:24:26.422376 | orchestrator | ok 2026-02-05 01:24:26.434029 | 2026-02-05 01:24:26.434155 | TASK [Fetch manager ssh hostkey] 2026-02-05 01:24:28.020259 | orchestrator | Output suppressed because no_log was given 2026-02-05 01:24:28.035156 | 2026-02-05 01:24:28.035352 | TASK [Get ssh keypair from terraform environment] 2026-02-05 01:24:28.573630 | orchestrator | ok: Runtime: 0:00:00.007809 2026-02-05 01:24:28.588395 | 
2026-02-05 01:24:28.588589 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-05 01:24:28.628530 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete.
2026-02-05 01:24:28.639501 |
2026-02-05 01:24:28.639689 | TASK [Run manager part 0]
2026-02-05 01:24:29.820763 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-05 01:24:29.958863 | orchestrator |
2026-02-05 01:24:29.958916 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-05 01:24:29.958923 | orchestrator |
2026-02-05 01:24:29.958938 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-05 01:24:31.650875 | orchestrator | ok: [testbed-manager]
2026-02-05 01:24:31.650937 | orchestrator |
2026-02-05 01:24:31.650962 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-05 01:24:31.650975 | orchestrator |
2026-02-05 01:24:31.650987 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 01:24:33.576583 | orchestrator | ok: [testbed-manager]
2026-02-05 01:24:33.576705 | orchestrator |
2026-02-05 01:24:33.576715 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-05 01:24:34.242696 | orchestrator | ok: [testbed-manager]
2026-02-05 01:24:34.242753 | orchestrator |
2026-02-05 01:24:34.242765 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-05 01:24:34.292322 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.292370 | orchestrator |
2026-02-05 01:24:34.292380 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-05 01:24:34.324898 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.324955 | orchestrator |
2026-02-05 01:24:34.324966 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-05 01:24:34.358573 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.358631 | orchestrator |
2026-02-05 01:24:34.358640 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-05 01:24:34.393064 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.393135 | orchestrator |
2026-02-05 01:24:34.393145 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-05 01:24:34.422504 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.422574 | orchestrator |
2026-02-05 01:24:34.422587 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-05 01:24:34.454982 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.455039 | orchestrator |
2026-02-05 01:24:34.455048 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-05 01:24:34.484878 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:24:34.484935 | orchestrator |
2026-02-05 01:24:34.484942 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-05 01:24:35.225659 | orchestrator | changed: [testbed-manager]
2026-02-05 01:24:35.225708 | orchestrator |
2026-02-05 01:24:35.225714 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-05 01:26:48.945124 | orchestrator | changed: [testbed-manager]
2026-02-05 01:26:48.945192 | orchestrator |
2026-02-05 01:26:48.945250 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-05 01:28:31.553288 | orchestrator | changed: [testbed-manager]
2026-02-05 01:28:31.553380 | orchestrator |
2026-02-05 01:28:31.553396 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-05 01:28:54.021930 | orchestrator | changed: [testbed-manager]
2026-02-05 01:28:54.022008 | orchestrator |
2026-02-05 01:28:54.022051 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-05 01:29:02.552755 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:02.552847 | orchestrator |
2026-02-05 01:29:02.552866 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-05 01:29:02.601066 | orchestrator | ok: [testbed-manager]
2026-02-05 01:29:02.601126 | orchestrator |
2026-02-05 01:29:02.601162 | orchestrator | TASK [Get current user] ********************************************************
2026-02-05 01:29:03.379164 | orchestrator | ok: [testbed-manager]
2026-02-05 01:29:03.379217 | orchestrator |
2026-02-05 01:29:03.379227 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-05 01:29:04.087985 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:04.088056 | orchestrator |
2026-02-05 01:29:04.088066 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-05 01:29:10.336527 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:10.336609 | orchestrator |
2026-02-05 01:29:10.336637 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-05 01:29:16.146957 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:16.147036 | orchestrator |
2026-02-05 01:29:16.147050 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-05 01:29:18.686331 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:18.686416 | orchestrator |
2026-02-05 01:29:18.686433 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-02-05 01:29:20.405327 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:20.405389 | orchestrator |
2026-02-05 01:29:20.405397 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-02-05 01:29:21.458286 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-02-05 01:29:21.458342 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-02-05 01:29:21.458350 | orchestrator |
2026-02-05 01:29:21.458357 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-02-05 01:29:21.506976 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-02-05 01:29:21.507087 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-02-05 01:29:21.507111 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-02-05 01:29:21.507176 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-02-05 01:29:26.125790 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-02-05 01:29:26.125854 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-02-05 01:29:26.125859 | orchestrator |
2026-02-05 01:29:26.125865 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2026-02-05 01:29:26.672391 | orchestrator | changed: [testbed-manager]
2026-02-05 01:29:26.672466 | orchestrator |
2026-02-05 01:29:26.672476 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2026-02-05 01:30:54.020108 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2026-02-05 01:30:54.020165 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2026-02-05 01:30:54.020178 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2026-02-05 01:30:54.020187 | orchestrator |
2026-02-05 01:30:54.020197 | orchestrator | TASK [Install local collections] ***********************************************
2026-02-05 01:30:56.283739 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2026-02-05 01:30:56.283845 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2026-02-05 01:30:56.283866 | orchestrator |
2026-02-05 01:30:56.283883 | orchestrator | PLAY [Create operator user] ****************************************************
2026-02-05 01:30:56.283898 | orchestrator |
2026-02-05 01:30:56.283912 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 01:30:57.664672 | orchestrator | ok: [testbed-manager]
2026-02-05 01:30:57.664707 | orchestrator |
2026-02-05 01:30:57.664714 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-05 01:30:57.706399 | orchestrator | ok: [testbed-manager]
2026-02-05 01:30:57.706436 | orchestrator |
2026-02-05 01:30:57.706444 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-05 01:30:57.779950 | orchestrator | ok: [testbed-manager]
2026-02-05 01:30:57.779988 | orchestrator |
2026-02-05 01:30:57.779995 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-05 01:30:58.581700 | orchestrator | changed: [testbed-manager]
2026-02-05 01:30:58.581789 | orchestrator |
2026-02-05 01:30:58.581806 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-05 01:30:59.272940 | orchestrator | changed: [testbed-manager]
2026-02-05 01:30:59.273016 | orchestrator |
2026-02-05 01:30:59.273026 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-05 01:31:00.524327 | orchestrator | changed: [testbed-manager] => (item=adm)
2026-02-05 01:31:00.524412 | orchestrator | changed: [testbed-manager] => (item=sudo)
2026-02-05 01:31:00.524427 | orchestrator |
2026-02-05 01:31:00.524455 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-05 01:31:01.866690 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:01.866794 | orchestrator |
2026-02-05 01:31:01.866810 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-05 01:31:03.523654 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 01:31:03.523712 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2026-02-05 01:31:03.523722 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2026-02-05 01:31:03.523729 | orchestrator |
2026-02-05 01:31:03.523737 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-05 01:31:03.582172 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:03.582249 | orchestrator |
2026-02-05 01:31:03.582260 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-05 01:31:03.668967 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:03.669038 | orchestrator |
2026-02-05 01:31:03.669050 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-05 01:31:04.192394 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:04.192462 | orchestrator |
2026-02-05 01:31:04.192472 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-05 01:31:04.262800 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:04.262839 | orchestrator |
2026-02-05 01:31:04.262846 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-05 01:31:05.121466 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-05 01:31:05.121597 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:05.121611 | orchestrator |
2026-02-05 01:31:05.121620 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-05 01:31:05.152550 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:05.152619 | orchestrator |
2026-02-05 01:31:05.152626 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-05 01:31:05.182790 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:05.182884 | orchestrator |
2026-02-05 01:31:05.182901 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-05 01:31:05.211124 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:05.211189 | orchestrator |
2026-02-05 01:31:05.211198 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-05 01:31:05.276386 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:05.276436 | orchestrator |
2026-02-05 01:31:05.276444 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-05 01:31:05.955909 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:05.956005 | orchestrator |
2026-02-05 01:31:05.956021 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-05 01:31:05.956034 | orchestrator |
2026-02-05 01:31:05.956045 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 01:31:07.292606 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:07.292677 | orchestrator |
2026-02-05 01:31:07.292686 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2026-02-05 01:31:08.224904 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:08.224973 | orchestrator |
2026-02-05 01:31:08.224983 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:31:08.224992 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
2026-02-05 01:31:08.224999 | orchestrator |
2026-02-05 01:31:08.424267 | orchestrator | ok: Runtime: 0:06:39.402263
2026-02-05 01:31:08.443408 |
2026-02-05 01:31:08.443564 | TASK [Point out that logging in to the manager is now possible]
2026-02-05 01:31:08.485341 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2026-02-05 01:31:08.495153 |
2026-02-05 01:31:08.495344 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-05 01:31:08.528418 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete.
2026-02-05 01:31:08.536268 |
2026-02-05 01:31:08.536375 | TASK [Run manager part 1 + 2]
2026-02-05 01:31:09.449926 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-05 01:31:09.508267 | orchestrator |
2026-02-05 01:31:09.508321 | orchestrator | PLAY [Run manager part 1] ******************************************************
2026-02-05 01:31:09.508329 | orchestrator |
2026-02-05 01:31:09.508342 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 01:31:12.479338 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:12.479394 | orchestrator |
2026-02-05 01:31:12.479419 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-05 01:31:12.516192 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:12.516368 | orchestrator |
2026-02-05 01:31:12.516378 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-05 01:31:12.565630 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:12.565823 | orchestrator |
2026-02-05 01:31:12.565840 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-05 01:31:12.610870 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:12.610918 | orchestrator |
2026-02-05 01:31:12.610925 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-05 01:31:12.689897 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:12.689969 | orchestrator |
2026-02-05 01:31:12.689981 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-05 01:31:12.770202 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:12.770254 | orchestrator |
2026-02-05 01:31:12.770263 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-05 01:31:12.830251 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2026-02-05 01:31:12.830297 | orchestrator |
2026-02-05 01:31:12.830303 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-05 01:31:13.550251 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:13.550330 | orchestrator |
2026-02-05 01:31:13.550348 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-05 01:31:13.601642 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:13.601698 | orchestrator |
2026-02-05 01:31:13.601705 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-05 01:31:14.981228 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:14.981284 | orchestrator |
2026-02-05 01:31:14.981293 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-05 01:31:15.548779 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:15.548836 | orchestrator |
2026-02-05 01:31:15.548848 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-05 01:31:16.663620 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:16.663696 | orchestrator |
2026-02-05 01:31:16.663712 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-05 01:31:31.475521 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:31.475611 | orchestrator |
2026-02-05 01:31:31.475626 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-05 01:31:32.164319 | orchestrator | ok: [testbed-manager]
2026-02-05 01:31:32.164749 | orchestrator |
2026-02-05 01:31:32.164792 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-05 01:31:32.220547 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:32.220647 | orchestrator |
2026-02-05 01:31:32.220658 | orchestrator | TASK [Copy SSH public key] *****************************************************
2026-02-05 01:31:33.141023 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:33.141166 | orchestrator |
2026-02-05 01:31:33.141193 | orchestrator | TASK [Copy SSH private key] ****************************************************
2026-02-05 01:31:34.113136 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:34.113171 | orchestrator |
2026-02-05 01:31:34.113177 | orchestrator | TASK [Create configuration directory] ******************************************
2026-02-05 01:31:34.657651 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:34.657725 | orchestrator |
2026-02-05 01:31:34.657742 | orchestrator | TASK [Copy testbed repo] *******************************************************
2026-02-05 01:31:34.694572 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-02-05 01:31:34.694746 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-02-05 01:31:34.694765 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-02-05 01:31:34.694778 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-02-05 01:31:36.730762 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:36.730829 | orchestrator |
2026-02-05 01:31:36.730839 | orchestrator | TASK [Install python requirements in venv] *************************************
2026-02-05 01:31:45.604887 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2026-02-05 01:31:45.604980 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2026-02-05 01:31:45.604995 | orchestrator | ok: [testbed-manager] => (item=packaging)
2026-02-05 01:31:45.605006 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2026-02-05 01:31:45.605024 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2026-02-05 01:31:45.605034 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2026-02-05 01:31:45.605043 | orchestrator |
2026-02-05 01:31:45.605053 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2026-02-05 01:31:46.669523 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:46.669570 | orchestrator |
2026-02-05 01:31:46.669578 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2026-02-05 01:31:46.707324 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:46.707403 | orchestrator |
2026-02-05 01:31:46.707420 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2026-02-05 01:31:49.738680 | orchestrator | changed: [testbed-manager]
2026-02-05 01:31:49.738720 | orchestrator |
2026-02-05 01:31:49.738726 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2026-02-05 01:31:49.776829 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:31:49.776865 | orchestrator |
2026-02-05 01:31:49.776871 | orchestrator | TASK [Run manager part 2] ******************************************************
2026-02-05 01:33:18.884129 | orchestrator | changed: [testbed-manager]
2026-02-05 01:33:18.884166 | orchestrator |
2026-02-05 01:33:18.884173 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-05 01:33:19.979017 | orchestrator | ok: [testbed-manager]
2026-02-05 01:33:19.979113 | orchestrator |
2026-02-05 01:33:19.979132 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:33:19.979146 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2026-02-05 01:33:19.979157 | orchestrator |
2026-02-05 01:33:20.168764 | orchestrator | ok: Runtime: 0:02:11.248327
2026-02-05 01:33:20.179292 |
2026-02-05 01:33:20.179401 | TASK [Reboot manager]
2026-02-05 01:33:21.714560 | orchestrator | ok: Runtime: 0:00:00.952073
2026-02-05 01:33:21.732633 |
2026-02-05 01:33:21.732792 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-05 01:33:35.465556 | orchestrator | ok
2026-02-05 01:33:35.475093 |
2026-02-05 01:33:35.475208 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-05 01:34:35.519424 | orchestrator | ok
2026-02-05 01:34:35.528424 |
2026-02-05 01:34:35.528548 | TASK [Deploy manager + bootstrap nodes]
2026-02-05 01:34:37.927548 | orchestrator |
2026-02-05 01:34:37.927822 | orchestrator | # DEPLOY MANAGER
2026-02-05 01:34:37.927854 | orchestrator |
2026-02-05 01:34:37.927872 | orchestrator | + set -e
2026-02-05 01:34:37.927888 | orchestrator | + echo
2026-02-05 01:34:37.927905 | orchestrator | + echo '# DEPLOY MANAGER'
2026-02-05 01:34:37.927926 | orchestrator | + echo
2026-02-05 01:34:37.927978 | orchestrator | + cat /opt/manager-vars.sh
2026-02-05 01:34:37.930887 | orchestrator | export NUMBER_OF_NODES=6
2026-02-05 01:34:37.930978 | orchestrator |
2026-02-05 01:34:37.930991 | orchestrator | export CEPH_VERSION=reef
2026-02-05 01:34:37.931002 | orchestrator | export CONFIGURATION_VERSION=main
2026-02-05 01:34:37.931011 | orchestrator | export MANAGER_VERSION=9.5.0
2026-02-05 01:34:37.931032 | orchestrator | export OPENSTACK_VERSION=2024.2
2026-02-05 01:34:37.931041 | orchestrator |
2026-02-05 01:34:37.931055 | orchestrator | export ARA=false
2026-02-05 01:34:37.931063 | orchestrator | export DEPLOY_MODE=manager
2026-02-05 01:34:37.931077 | orchestrator | export TEMPEST=false
2026-02-05 01:34:37.931085 | orchestrator | export IS_ZUUL=true
2026-02-05 01:34:37.931093 | orchestrator |
2026-02-05 01:34:37.931127 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:34:37.931141 | orchestrator | export EXTERNAL_API=false
2026-02-05 01:34:37.931149 | orchestrator |
2026-02-05 01:34:37.931157 | orchestrator | export IMAGE_USER=ubuntu
2026-02-05 01:34:37.931168 | orchestrator | export IMAGE_NODE_USER=ubuntu
2026-02-05 01:34:37.931176 | orchestrator |
2026-02-05 01:34:37.931184 | orchestrator | export CEPH_STACK=ceph-ansible
2026-02-05 01:34:37.931201 | orchestrator |
2026-02-05 01:34:37.931210 | orchestrator | + echo
2026-02-05 01:34:37.931222 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 01:34:37.932051 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 01:34:37.932067 | orchestrator | ++ INTERACTIVE=false
2026-02-05 01:34:37.932077 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 01:34:37.932087 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 01:34:37.932189 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 01:34:37.932328 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 01:34:37.932340 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 01:34:37.932348 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 01:34:37.932356 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 01:34:37.932364 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 01:34:37.932373 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 01:34:37.932381 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 01:34:37.932389 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 01:34:37.932520 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 01:34:37.932549 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 01:34:37.932562 | orchestrator | ++ export ARA=false
2026-02-05 01:34:37.932575 | orchestrator | ++ ARA=false
2026-02-05 01:34:37.932588 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 01:34:37.932612 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 01:34:37.932637 | orchestrator | ++ export TEMPEST=false
2026-02-05 01:34:37.932664 | orchestrator | ++ TEMPEST=false
2026-02-05 01:34:37.932692 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 01:34:37.932720 | orchestrator | ++ IS_ZUUL=true
2026-02-05 01:34:37.932748 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:34:37.932778 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:34:37.932809 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 01:34:37.932842 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 01:34:37.932862 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 01:34:37.932881 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 01:34:37.932901 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 01:34:37.932919 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 01:34:37.932940 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 01:34:37.932959 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 01:34:37.932979 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2026-02-05 01:34:37.984566 | orchestrator | + docker version
2026-02-05 01:34:38.091354 | orchestrator | Client: Docker Engine - Community
2026-02-05 01:34:38.091465 | orchestrator |  Version:           27.5.1
2026-02-05 01:34:38.091482 | orchestrator |  API version:       1.47
2026-02-05 01:34:38.091492 | orchestrator |  Go version:        go1.22.11
2026-02-05 01:34:38.091502 | orchestrator |  Git commit:        9f9e405
2026-02-05 01:34:38.091511 | orchestrator |  Built:             Wed Jan 22 13:41:48 2025
2026-02-05 01:34:38.091523 | orchestrator |  OS/Arch:           linux/amd64
2026-02-05 01:34:38.091533 | orchestrator |  Context:           default
2026-02-05 01:34:38.091542 | orchestrator |
2026-02-05 01:34:38.091552 | orchestrator | Server: Docker Engine - Community
2026-02-05 01:34:38.091563 | orchestrator |  Engine:
2026-02-05 01:34:38.091573 | orchestrator |   Version:          27.5.1
2026-02-05 01:34:38.091583 | orchestrator |   API version:      1.47 (minimum version 1.24)
2026-02-05 01:34:38.091623 | orchestrator |   Go version:       go1.22.11
2026-02-05 01:34:38.091634 | orchestrator |   Git commit:       4c9b3b0
2026-02-05 01:34:38.091643 | orchestrator |   Built:            Wed Jan 22 13:41:48 2025
2026-02-05 01:34:38.091653 | orchestrator |   OS/Arch:          linux/amd64
2026-02-05 01:34:38.091662 | orchestrator |   Experimental:     false
2026-02-05 01:34:38.091671 | orchestrator |  containerd:
2026-02-05 01:34:38.091681 | orchestrator |   Version:          v2.2.1
2026-02-05 01:34:38.091691 | orchestrator |   GitCommit:        dea7da592f5d1d2b7755e3a161be07f43fad8f75
2026-02-05 01:34:38.091701 | orchestrator |  runc:
2026-02-05 01:34:38.091755 | orchestrator |   Version:          1.3.4
2026-02-05 01:34:38.091774 | orchestrator |   GitCommit:        v1.3.4-0-gd6d73eb8
2026-02-05 01:34:38.091784 | orchestrator |  docker-init:
2026-02-05 01:34:38.091793 | orchestrator |   Version:          0.19.0
2026-02-05 01:34:38.091803 | orchestrator |   GitCommit:        de40ad0
2026-02-05 01:34:38.095060 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2026-02-05 01:34:38.103597 | orchestrator | + set -e
2026-02-05 01:34:38.103735 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 01:34:38.103751 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 01:34:38.103763 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 01:34:38.103773 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 01:34:38.103783 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 01:34:38.103793 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 01:34:38.103805 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 01:34:38.103815 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 01:34:38.103836 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 01:34:38.103857 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 01:34:38.103868 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 01:34:38.103885 | orchestrator | ++ export ARA=false
2026-02-05 01:34:38.103904 | orchestrator | ++ ARA=false
2026-02-05 01:34:38.103921 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 01:34:38.103938 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 01:34:38.103955 | orchestrator | ++ export TEMPEST=false
2026-02-05 01:34:38.103972 | orchestrator | ++ TEMPEST=false
2026-02-05 01:34:38.104085 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 01:34:38.104151 | orchestrator | ++ IS_ZUUL=true
2026-02-05 01:34:38.104169 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:34:38.104194 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:34:38.104212 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 01:34:38.104228 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 01:34:38.104244 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 01:34:38.104260 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 01:34:38.104277 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 01:34:38.104293 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 01:34:38.104309 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 01:34:38.104322 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 01:34:38.104337 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 01:34:38.104354 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 01:34:38.104370 | orchestrator | ++ INTERACTIVE=false
2026-02-05 01:34:38.104387 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 01:34:38.104409 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 01:34:38.104428 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-05 01:34:38.104440 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-05 01:34:38.109653 | orchestrator | + set -e
2026-02-05 01:34:38.109736 | orchestrator | + VERSION=9.5.0
2026-02-05 01:34:38.109755 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-05 01:34:38.117487 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-05 01:34:38.117591 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-05 01:34:38.121368 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-05 01:34:38.124979 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-05 01:34:38.131362 | orchestrator | /opt/configuration ~
2026-02-05 01:34:38.131422 | orchestrator | + set -e
2026-02-05 01:34:38.131434 | orchestrator | + pushd /opt/configuration
2026-02-05 01:34:38.131444 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-05 01:34:38.132587 | orchestrator | + source /opt/venv/bin/activate
2026-02-05 01:34:38.133629 | orchestrator | ++ deactivate nondestructive
2026-02-05 01:34:38.133672 | orchestrator | ++ '[' -n '' ']'
2026-02-05 01:34:38.133686 | orchestrator | ++ '[' -n '' ']'
2026-02-05 01:34:38.133721 | orchestrator | ++ hash -r
2026-02-05 01:34:38.133731 | orchestrator | ++ '[' -n '' ']'
2026-02-05 01:34:38.133740 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-05 01:34:38.133749 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-05 01:34:38.133759 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-05 01:34:38.133769 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-05 01:34:38.133779 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-05 01:34:38.133789 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-05 01:34:38.133798 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-05 01:34:38.133814 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 01:34:38.133821 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 01:34:38.133826 | orchestrator | ++ export PATH
2026-02-05 01:34:38.133832 | orchestrator | ++ '[' -n '' ']'
2026-02-05 01:34:38.133838 | orchestrator | ++ '[' -z '' ']'
2026-02-05 01:34:38.133843 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-05 01:34:38.133849 | orchestrator | ++ PS1='(venv) '
2026-02-05 01:34:38.133854 | orchestrator | ++ export PS1
2026-02-05 01:34:38.133860 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-05 01:34:38.133865 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-05 01:34:38.133871 | orchestrator | ++ hash -r
2026-02-05 01:34:38.133877 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-05 01:34:39.132022 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-05 01:34:39.132924 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-05 01:34:39.134358 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-05 01:34:39.135653 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-05 01:34:39.136994 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-05 01:34:39.146852 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-05 01:34:39.148200 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-05 01:34:39.149227 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-05 01:34:39.150621 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-05 01:34:39.182154 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-05 01:34:39.183284 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-05 01:34:39.185243 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-05 01:34:39.186545 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-05 01:34:39.190438 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-05 01:34:39.393248 | orchestrator | ++ which gilt
2026-02-05 01:34:39.397635 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-05 01:34:39.397736 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-05 01:34:39.601697 | orchestrator | osism.cfg-generics:
2026-02-05 01:34:39.738936 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-05 01:34:39.739421 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-05 01:34:39.740054 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-05 01:34:39.740109 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-05 01:34:40.302580 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-05 01:34:40.314877 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-05 01:34:40.618619 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-05 01:34:40.668997 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-05 01:34:40.669073 | orchestrator | + deactivate
2026-02-05 01:34:40.669081 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-05 01:34:40.669088 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 01:34:40.669093 | orchestrator | + export PATH
2026-02-05 01:34:40.669099 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-05 01:34:40.669104 | orchestrator | + '[' -n '' ']'
2026-02-05 01:34:40.669111 | orchestrator | + hash -r
2026-02-05 01:34:40.669135 | orchestrator | + '[' -n '' ']'
2026-02-05 01:34:40.669142 | orchestrator | + unset VIRTUAL_ENV
2026-02-05 01:34:40.669148 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-05 01:34:40.669155 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-02-05 01:34:40.669162 | orchestrator | + unset -f deactivate 2026-02-05 01:34:40.669178 | orchestrator | ~ 2026-02-05 01:34:40.669184 | orchestrator | + popd 2026-02-05 01:34:40.670938 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-05 01:34:40.670966 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-05 01:34:40.671454 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-05 01:34:40.722209 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-05 01:34:40.722330 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-05 01:34:40.722995 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-05 01:34:40.775002 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 01:34:40.775758 | orchestrator | ++ semver 2024.2 2025.1 2026-02-05 01:34:40.826758 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 01:34:40.826888 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-05 01:34:40.918511 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 01:34:40.918589 | orchestrator | + source /opt/venv/bin/activate 2026-02-05 01:34:40.918596 | orchestrator | ++ deactivate nondestructive 2026-02-05 01:34:40.918602 | orchestrator | ++ '[' -n '' ']' 2026-02-05 01:34:40.918606 | orchestrator | ++ '[' -n '' ']' 2026-02-05 01:34:40.918618 | orchestrator | ++ hash -r 2026-02-05 01:34:40.918623 | orchestrator | ++ '[' -n '' ']' 2026-02-05 01:34:40.918627 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-05 01:34:40.918632 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-05 01:34:40.918636 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-05 01:34:40.918880 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-05 01:34:40.918911 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-05 01:34:40.918918 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-05 01:34:40.918923 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-05 01:34:40.918931 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 01:34:40.919049 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 01:34:40.919056 | orchestrator | ++ export PATH 2026-02-05 01:34:40.919065 | orchestrator | ++ '[' -n '' ']' 2026-02-05 01:34:40.919263 | orchestrator | ++ '[' -z '' ']' 2026-02-05 01:34:40.919306 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-05 01:34:40.919339 | orchestrator | ++ PS1='(venv) ' 2026-02-05 01:34:40.919348 | orchestrator | ++ export PS1 2026-02-05 01:34:40.919354 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-05 01:34:40.919361 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-05 01:34:40.919424 | orchestrator | ++ hash -r 2026-02-05 01:34:40.919561 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-05 01:34:41.935431 | orchestrator | 2026-02-05 01:34:41.935521 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-05 01:34:41.935530 | orchestrator | 2026-02-05 01:34:41.935537 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-05 01:34:42.493170 | orchestrator | ok: [testbed-manager] 2026-02-05 01:34:42.493250 | orchestrator | 2026-02-05 01:34:42.493256 | orchestrator | TASK [Copy fact files] ********************************************************* 
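The trace above shows `set-manager-version.sh` pinning `manager_version` with `sed` and then gating features on the result of a `semver` helper (`semver 9.5.0 7.0.0` prints `1`, `semver 9.5.0 10.0.0-0` prints `-1`). The helper itself is not reproduced in the log; a minimal sketch with the same contract, assuming GNU `sort -V` semantics for the version comparison:

```shell
#!/usr/bin/env bash
set -e

# Print 1, 0, or -1 depending on whether $1 is greater than, equal to,
# or less than $2. This is a sketch built on GNU `sort -V`; the actual
# `semver` helper used by the job is not shown in the log.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    # The smaller version sorts first; if that is $2, then $1 is newer.
    echo 1
  else
    echo -1
  fi
}

# Gate a feature the way the trace does: enable Kubernetes support
# once the manager version reaches 7.0.0.
if [ "$(semver 9.5.0 7.0.0)" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```

With these inputs the sketch reproduces the three comparisons visible in the trace (`9.5.0` vs `7.0.0`, `10.0.0-0`, and `2024.2` vs `2025.1`).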
2026-02-05 01:34:43.455479 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:43.455612 | orchestrator | 2026-02-05 01:34:43.455625 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-05 01:34:43.455661 | orchestrator | 2026-02-05 01:34:43.455667 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 01:34:45.584848 | orchestrator | ok: [testbed-manager] 2026-02-05 01:34:45.584936 | orchestrator | 2026-02-05 01:34:45.584944 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-05 01:34:45.639951 | orchestrator | ok: [testbed-manager] 2026-02-05 01:34:45.640121 | orchestrator | 2026-02-05 01:34:45.640181 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-05 01:34:46.087943 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:46.088044 | orchestrator | 2026-02-05 01:34:46.088060 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-05 01:34:46.125187 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:34:46.125262 | orchestrator | 2026-02-05 01:34:46.125273 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-05 01:34:46.445679 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:46.445774 | orchestrator | 2026-02-05 01:34:46.445788 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-05 01:34:46.765454 | orchestrator | ok: [testbed-manager] 2026-02-05 01:34:46.765537 | orchestrator | 2026-02-05 01:34:46.765548 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-05 01:34:46.879829 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:34:46.879916 | orchestrator | 2026-02-05 01:34:46.879925 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-05 01:34:46.879931 | orchestrator | 2026-02-05 01:34:46.879935 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 01:34:49.596400 | orchestrator | ok: [testbed-manager] 2026-02-05 01:34:49.596502 | orchestrator | 2026-02-05 01:34:49.596518 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-05 01:34:49.699345 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-05 01:34:49.699446 | orchestrator | 2026-02-05 01:34:49.699456 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-05 01:34:49.752475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-05 01:34:49.752602 | orchestrator | 2026-02-05 01:34:49.752628 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-05 01:34:50.808537 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-05 01:34:50.808686 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-05 01:34:50.808706 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-05 01:34:50.808719 | orchestrator | 2026-02-05 01:34:50.808734 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-05 01:34:52.542636 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-05 01:34:52.542717 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-05 01:34:52.542726 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-05 01:34:52.542734 | orchestrator | 2026-02-05 01:34:52.542741 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-05 01:34:53.147002 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 01:34:53.147135 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:53.147225 | orchestrator | 2026-02-05 01:34:53.147248 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-05 01:34:53.745287 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 01:34:53.745378 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:53.745390 | orchestrator | 2026-02-05 01:34:53.745398 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-05 01:34:53.796937 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:34:53.797024 | orchestrator | 2026-02-05 01:34:53.797033 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-05 01:34:54.146103 | orchestrator | ok: [testbed-manager] 2026-02-05 01:34:54.146236 | orchestrator | 2026-02-05 01:34:54.146254 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-05 01:34:54.219398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-05 01:34:54.219477 | orchestrator | 2026-02-05 01:34:54.219486 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-05 01:34:55.268114 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:55.268246 | orchestrator | 2026-02-05 01:34:55.268257 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-05 01:34:56.073991 | orchestrator | changed: [testbed-manager] 2026-02-05 01:34:56.074145 | orchestrator | 2026-02-05 01:34:56.074210 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-05 01:35:06.212446 | 
orchestrator | changed: [testbed-manager] 2026-02-05 01:35:06.212557 | orchestrator | 2026-02-05 01:35:06.212573 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-05 01:35:06.259854 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:35:06.259959 | orchestrator | 2026-02-05 01:35:06.259992 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-05 01:35:06.260001 | orchestrator | 2026-02-05 01:35:06.260008 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 01:35:08.070011 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:08.070116 | orchestrator | 2026-02-05 01:35:08.070127 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-05 01:35:08.182011 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-05 01:35:08.182171 | orchestrator | 2026-02-05 01:35:08.182239 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-05 01:35:08.250425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 01:35:08.250495 | orchestrator | 2026-02-05 01:35:08.250508 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-05 01:35:10.587620 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:10.587723 | orchestrator | 2026-02-05 01:35:10.587740 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-05 01:35:10.643941 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:10.644047 | orchestrator | 2026-02-05 01:35:10.644062 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-05 01:35:10.766356 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-05 01:35:10.766438 | orchestrator | 2026-02-05 01:35:10.766448 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-05 01:35:13.584399 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-05 01:35:13.584517 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-05 01:35:13.584533 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-05 01:35:13.584547 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-05 01:35:13.584558 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-05 01:35:13.584569 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-05 01:35:13.584581 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-05 01:35:13.584592 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-05 01:35:13.584603 | orchestrator | 2026-02-05 01:35:13.584616 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-05 01:35:14.213308 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:14.213386 | orchestrator | 2026-02-05 01:35:14.213395 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-05 01:35:14.829629 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:14.829743 | orchestrator | 2026-02-05 01:35:14.829767 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-05 01:35:14.910476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-05 01:35:14.910592 | orchestrator | 2026-02-05 01:35:14.910623 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-05 01:35:16.101688 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-05 01:35:16.101765 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-05 01:35:16.101773 | orchestrator | 2026-02-05 01:35:16.101779 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-05 01:35:16.714550 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:16.714641 | orchestrator | 2026-02-05 01:35:16.714652 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-05 01:35:16.773395 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:35:16.773482 | orchestrator | 2026-02-05 01:35:16.773495 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-05 01:35:16.843333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-05 01:35:16.843420 | orchestrator | 2026-02-05 01:35:16.843430 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-05 01:35:17.443475 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:17.443571 | orchestrator | 2026-02-05 01:35:17.443582 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-05 01:35:17.503368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-05 01:35:17.503456 | orchestrator | 2026-02-05 01:35:17.503468 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-05 01:35:18.858275 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 01:35:18.858367 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-05 01:35:18.858379 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:18.858388 | orchestrator | 2026-02-05 01:35:18.858397 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-05 01:35:19.481186 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:19.481374 | orchestrator | 2026-02-05 01:35:19.481387 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-05 01:35:19.539418 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:35:19.539483 | orchestrator | 2026-02-05 01:35:19.539493 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-05 01:35:19.640928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-05 01:35:19.641023 | orchestrator | 2026-02-05 01:35:19.641039 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-05 01:35:20.164062 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:20.164160 | orchestrator | 2026-02-05 01:35:20.164174 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-05 01:35:20.571621 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:20.571755 | orchestrator | 2026-02-05 01:35:20.571784 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-05 01:35:21.717655 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-05 01:35:21.717771 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-05 01:35:21.717796 | orchestrator | 2026-02-05 01:35:21.717818 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-05 01:35:22.371060 | orchestrator | changed: [testbed-manager] 2026-02-05 
01:35:22.371127 | orchestrator | 2026-02-05 01:35:22.371137 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-05 01:35:22.729331 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:22.729428 | orchestrator | 2026-02-05 01:35:22.729442 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-05 01:35:23.111732 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:23.111836 | orchestrator | 2026-02-05 01:35:23.111853 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-05 01:35:23.161003 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:35:23.161099 | orchestrator | 2026-02-05 01:35:23.161113 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-05 01:35:23.238749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-05 01:35:23.238881 | orchestrator | 2026-02-05 01:35:23.238896 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-05 01:35:23.291675 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:23.291793 | orchestrator | 2026-02-05 01:35:23.291812 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-05 01:35:25.241176 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-05 01:35:25.241284 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-05 01:35:25.241296 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-05 01:35:25.241303 | orchestrator | 2026-02-05 01:35:25.241312 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-05 01:35:25.932197 | orchestrator | changed: [testbed-manager] 2026-02-05 
01:35:25.932296 | orchestrator | 2026-02-05 01:35:25.932305 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-05 01:35:26.629489 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:26.629583 | orchestrator | 2026-02-05 01:35:26.629596 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-05 01:35:27.324897 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:27.324988 | orchestrator | 2026-02-05 01:35:27.325001 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-05 01:35:27.400712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-05 01:35:27.400800 | orchestrator | 2026-02-05 01:35:27.400816 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-05 01:35:27.443871 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:27.443952 | orchestrator | 2026-02-05 01:35:27.443962 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-05 01:35:28.125623 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-05 01:35:28.125692 | orchestrator | 2026-02-05 01:35:28.125699 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-05 01:35:28.203488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-05 01:35:28.203586 | orchestrator | 2026-02-05 01:35:28.203601 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-05 01:35:28.896517 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:28.896613 | orchestrator | 2026-02-05 01:35:28.896625 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-05 01:35:29.470073 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:29.470152 | orchestrator | 2026-02-05 01:35:29.470162 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-05 01:35:29.525450 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:35:29.525518 | orchestrator | 2026-02-05 01:35:29.525525 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-05 01:35:29.587311 | orchestrator | ok: [testbed-manager] 2026-02-05 01:35:29.587410 | orchestrator | 2026-02-05 01:35:29.587426 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-05 01:35:30.357081 | orchestrator | changed: [testbed-manager] 2026-02-05 01:35:30.357190 | orchestrator | 2026-02-05 01:35:30.357208 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-05 01:36:36.200951 | orchestrator | changed: [testbed-manager] 2026-02-05 01:36:36.201065 | orchestrator | 2026-02-05 01:36:36.201081 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-05 01:36:37.174617 | orchestrator | ok: [testbed-manager] 2026-02-05 01:36:37.174739 | orchestrator | 2026-02-05 01:36:37.174757 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-05 01:36:37.232126 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:36:37.232227 | orchestrator | 2026-02-05 01:36:37.232244 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-05 01:36:43.086641 | orchestrator | changed: [testbed-manager] 2026-02-05 01:36:43.086743 | orchestrator | 2026-02-05 01:36:43.086758 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
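The play above selects between two mutually exclusive tasks, "Set mariadb healthcheck for mariadb < 11.0.0" (skipped) and ">= 11.0.0" (applied, since the image is `mariadb:11.8.4`). A sketch of that branch, where the concrete healthcheck commands are illustrative assumptions and not taken from the role:

```shell
#!/usr/bin/env bash
# Choose a container healthcheck command from the MariaDB image tag,
# mirroring the two mutually exclusive tasks in the play above.
# Both command strings below are assumptions for illustration only.
mariadb_version="11.8.4"
major="${mariadb_version%%.*}"   # "11" from "11.8.4"

if [ "$major" -ge 11 ]; then
  healthcheck="healthcheck.sh --connect"   # assumed >= 11.0.0 variant
else
  healthcheck="mysqladmin status"          # assumed < 11.0.0 variant
fi
echo "$healthcheck"
```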
2026-02-05 01:36:43.134278 | orchestrator | ok: [testbed-manager] 2026-02-05 01:36:43.134357 | orchestrator | 2026-02-05 01:36:43.134367 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-05 01:36:43.134374 | orchestrator | 2026-02-05 01:36:43.134380 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-05 01:36:43.275813 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:36:43.275893 | orchestrator | 2026-02-05 01:36:43.275903 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-05 01:37:43.333259 | orchestrator | Pausing for 60 seconds 2026-02-05 01:37:43.333372 | orchestrator | changed: [testbed-manager] 2026-02-05 01:37:43.333390 | orchestrator | 2026-02-05 01:37:43.334574 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-05 01:37:46.347288 | orchestrator | changed: [testbed-manager] 2026-02-05 01:37:46.347421 | orchestrator | 2026-02-05 01:37:46.347450 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-05 01:38:27.845086 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-05 01:38:27.845218 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
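The "Wait for an healthy manager service" handler above polls with a retry budget of 50 and logs each miss ("50 retries left", "49 retries left") before succeeding. The role's actual probe is not shown in the log; a generic sketch of such a retry loop, with the probe passed in as a command:

```shell
#!/usr/bin/env bash
# Run "$@" (the probe command) until it succeeds, up to RETRIES attempts,
# logging each failed attempt like the handler in the transcript above.
wait_for() {
  local i
  for ((i = ${RETRIES:-50}; i > 0; i--)); do
    "$@" && return 0
    echo "FAILED - RETRYING (${i} retries left)." >&2
    sleep "${DELAY:-5}"
  done
  return 1
}

# An assumed probe for a compose-managed container (not from the role):
#   docker inspect -f '{{.State.Health.Status}}' <container> | grep -qx healthy
```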
2026-02-05 01:38:27.845237 | orchestrator | changed: [testbed-manager] 2026-02-05 01:38:27.845251 | orchestrator | 2026-02-05 01:38:27.845283 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-05 01:38:37.843840 | orchestrator | changed: [testbed-manager] 2026-02-05 01:38:37.843974 | orchestrator | 2026-02-05 01:38:37.843999 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-05 01:38:37.935278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-05 01:38:37.935353 | orchestrator | 2026-02-05 01:38:37.935361 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-05 01:38:37.935367 | orchestrator | 2026-02-05 01:38:37.935372 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-05 01:38:37.987805 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:38:37.987912 | orchestrator | 2026-02-05 01:38:37.987933 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-05 01:38:38.055069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-05 01:38:38.055189 | orchestrator | 2026-02-05 01:38:38.055209 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-05 01:38:38.810748 | orchestrator | changed: [testbed-manager] 2026-02-05 01:38:38.810864 | orchestrator | 2026-02-05 01:38:38.810885 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-05 01:38:42.018376 | orchestrator | ok: [testbed-manager] 2026-02-05 01:38:42.018484 | orchestrator | 2026-02-05 01:38:42.018498 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-05 01:38:42.091357 | orchestrator | ok: [testbed-manager] => { 2026-02-05 01:38:42.091442 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-05 01:38:42.091451 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-05 01:38:42.091457 | orchestrator | "Checking running containers against expected versions...", 2026-02-05 01:38:42.091463 | orchestrator | "", 2026-02-05 01:38:42.091469 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-05 01:38:42.091475 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-05 01:38:42.091481 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091487 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-05 01:38:42.091492 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091497 | orchestrator | "", 2026-02-05 01:38:42.091503 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-05 01:38:42.091508 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-05 01:38:42.091514 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091539 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-05 01:38:42.091545 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091550 | orchestrator | "", 2026-02-05 01:38:42.091555 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-05 01:38:42.091561 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-05 01:38:42.091566 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091571 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-05 01:38:42.091576 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091582 | orchestrator | 
"", 2026-02-05 01:38:42.091587 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-05 01:38:42.091592 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-05 01:38:42.091598 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091603 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-05 01:38:42.091608 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091616 | orchestrator | "", 2026-02-05 01:38:42.091623 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-05 01:38:42.091633 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-05 01:38:42.091641 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091694 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-05 01:38:42.091700 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091705 | orchestrator | "", 2026-02-05 01:38:42.091710 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-05 01:38:42.091715 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091720 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091725 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091731 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091736 | orchestrator | "", 2026-02-05 01:38:42.091741 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-05 01:38:42.091746 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-05 01:38:42.091751 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091756 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-05 01:38:42.091761 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091766 | orchestrator | "", 2026-02-05 01:38:42.091772 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-05 01:38:42.091777 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 01:38:42.091782 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091787 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 01:38:42.091792 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091797 | orchestrator | "", 2026-02-05 01:38:42.091802 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-05 01:38:42.091807 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-05 01:38:42.091812 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091818 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-05 01:38:42.091823 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091828 | orchestrator | "", 2026-02-05 01:38:42.091833 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-05 01:38:42.091838 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 01:38:42.091843 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091848 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 01:38:42.091853 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091858 | orchestrator | "", 2026-02-05 01:38:42.091863 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-05 01:38:42.091868 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091874 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091885 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091890 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091895 | orchestrator | "", 2026-02-05 01:38:42.091900 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-05 01:38:42.091907 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091913 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091919 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091925 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091932 | orchestrator | "", 2026-02-05 01:38:42.091938 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-05 01:38:42.091947 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091956 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.091965 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.091974 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.091983 | orchestrator | "", 2026-02-05 01:38:42.091991 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-05 01:38:42.092001 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.092009 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.092017 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.092045 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.092055 | orchestrator | "", 2026-02-05 01:38:42.092065 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-05 01:38:42.092073 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.092082 | orchestrator | " Enabled: true", 2026-02-05 01:38:42.092099 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-05 01:38:42.092109 | orchestrator | " Status: ✅ MATCH", 2026-02-05 01:38:42.092119 | orchestrator | "", 2026-02-05 01:38:42.092128 | orchestrator | "=== Summary ===", 2026-02-05 01:38:42.092137 | orchestrator | "Errors (version mismatches): 0", 2026-02-05 01:38:42.092145 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-05 01:38:42.092152 | orchestrator | "", 2026-02-05 01:38:42.092158 | orchestrator | "✅ All running containers match expected versions!" 2026-02-05 01:38:42.092165 | orchestrator | ] 2026-02-05 01:38:42.092171 | orchestrator | } 2026-02-05 01:38:42.092177 | orchestrator | 2026-02-05 01:38:42.092184 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-05 01:38:42.147955 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:38:42.148052 | orchestrator | 2026-02-05 01:38:42.148067 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:38:42.148079 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-05 01:38:42.148088 | orchestrator | 2026-02-05 01:38:42.243238 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 01:38:42.243333 | orchestrator | + deactivate 2026-02-05 01:38:42.243348 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-05 01:38:42.243362 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 01:38:42.243372 | orchestrator | + export PATH 2026-02-05 01:38:42.243382 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-05 01:38:42.243393 | orchestrator | + '[' -n '' ']' 2026-02-05 01:38:42.243403 | orchestrator | + hash -r 2026-02-05 01:38:42.243804 | orchestrator | + '[' -n '' ']' 2026-02-05 01:38:42.243875 | orchestrator | + unset VIRTUAL_ENV 2026-02-05 01:38:42.243884 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-05 01:38:42.243891 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-05 01:38:42.243898 | orchestrator | + unset -f deactivate 2026-02-05 01:38:42.243905 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-05 01:38:42.253229 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 01:38:42.253306 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-05 01:38:42.253317 | orchestrator | + local max_attempts=60 2026-02-05 01:38:42.253327 | orchestrator | + local name=ceph-ansible 2026-02-05 01:38:42.253361 | orchestrator | + local attempt_num=1 2026-02-05 01:38:42.254419 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 01:38:42.287765 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 01:38:42.287860 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-05 01:38:42.287874 | orchestrator | + local max_attempts=60 2026-02-05 01:38:42.287886 | orchestrator | + local name=kolla-ansible 2026-02-05 01:38:42.287897 | orchestrator | + local attempt_num=1 2026-02-05 01:38:42.288416 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-05 01:38:42.317883 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 01:38:42.318002 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-05 01:38:42.318083 | orchestrator | + local max_attempts=60 2026-02-05 01:38:42.318092 | orchestrator | + local name=osism-ansible 2026-02-05 01:38:42.318098 | orchestrator | + local attempt_num=1 2026-02-05 01:38:42.318903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-05 01:38:42.352513 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 01:38:42.352613 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-05 01:38:42.352628 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-05 01:38:43.029747 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-05 01:38:43.214899 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-05 01:38:43.215021 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215045 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215061 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-05 01:38:43.215078 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-02-05 01:38:43.215092 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215120 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215129 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy) 2026-02-05 01:38:43.215137 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215145 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-02-05 01:38:43.215153 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes 
ago Up About a minute (healthy) 2026-02-05 01:38:43.215161 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-02-05 01:38:43.215169 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215195 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-05 01:38:43.215204 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.215212 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-02-05 01:38:43.220167 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-05 01:38:43.266083 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-05 01:38:43.266176 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-05 01:38:43.271441 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-05 01:38:55.503200 | orchestrator | 2026-02-05 01:38:55 | INFO  | Task daa3c975-1a5a-40c8-8832-54ce9eaaeca6 (resolvconf) was prepared for execution. 2026-02-05 01:38:55.503300 | orchestrator | 2026-02-05 01:38:55 | INFO  | It takes a moment until task daa3c975-1a5a-40c8-8832-54ce9eaaeca6 (resolvconf) has been started and output is visible here. 
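The shell trace above polls each manager container with `wait_for_container_healthy`, re-running `docker inspect -f '{{.State.Health.Status}}'` until it prints `healthy`. A minimal sketch of that polling pattern is below; the function name `wait_for_status`, the injectable probe command, and the one-second retry interval are assumptions for illustration — the real helper lives in the testbed scripts and is not shown in this log.

```shell
# wait_for_status MAX_ATTEMPTS EXPECTED CMD...: re-run CMD until its output
# equals EXPECTED, giving up after MAX_ATTEMPTS tries (sketch, not the real helper).
wait_for_status() {
    max_attempts="$1"; shift
    expected="$1"; shift
    attempt=1
    while [ "$("$@" 2>/dev/null)" != "$expected" ]; do
        # give up once the attempt budget is exhausted
        [ "$attempt" -ge "$max_attempts" ] && return 1
        attempt=$((attempt + 1))
        sleep 1
    done
    return 0
}

# In the job this would be invoked roughly as:
#   wait_for_status 60 healthy docker inspect -f '{{.State.Health.Status}}' ceph-ansible
```

Making the probe command injectable keeps the helper testable without a Docker daemon present.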
2026-02-05 01:39:09.179325 | orchestrator | 2026-02-05 01:39:09.179487 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-05 01:39:09.179507 | orchestrator | 2026-02-05 01:39:09.179518 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 01:39:09.179529 | orchestrator | Thursday 05 February 2026 01:38:59 +0000 (0:00:00.136) 0:00:00.136 ***** 2026-02-05 01:39:09.179539 | orchestrator | ok: [testbed-manager] 2026-02-05 01:39:09.179550 | orchestrator | 2026-02-05 01:39:09.179560 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-05 01:39:09.179571 | orchestrator | Thursday 05 February 2026 01:39:03 +0000 (0:00:03.680) 0:00:03.817 ***** 2026-02-05 01:39:09.179581 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:39:09.179592 | orchestrator | 2026-02-05 01:39:09.179602 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-05 01:39:09.179612 | orchestrator | Thursday 05 February 2026 01:39:03 +0000 (0:00:00.067) 0:00:03.884 ***** 2026-02-05 01:39:09.179622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-05 01:39:09.179633 | orchestrator | 2026-02-05 01:39:09.179643 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-05 01:39:09.179653 | orchestrator | Thursday 05 February 2026 01:39:03 +0000 (0:00:00.089) 0:00:03.974 ***** 2026-02-05 01:39:09.179663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 01:39:09.179672 | orchestrator | 2026-02-05 01:39:09.179682 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-05 01:39:09.179741 | orchestrator | Thursday 05 February 2026 01:39:03 +0000 (0:00:00.077) 0:00:04.051 ***** 2026-02-05 01:39:09.179754 | orchestrator | ok: [testbed-manager] 2026-02-05 01:39:09.179764 | orchestrator | 2026-02-05 01:39:09.179774 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-05 01:39:09.179783 | orchestrator | Thursday 05 February 2026 01:39:04 +0000 (0:00:01.039) 0:00:05.090 ***** 2026-02-05 01:39:09.179793 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:39:09.179803 | orchestrator | 2026-02-05 01:39:09.179813 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-05 01:39:09.179823 | orchestrator | Thursday 05 February 2026 01:39:04 +0000 (0:00:00.063) 0:00:05.154 ***** 2026-02-05 01:39:09.179832 | orchestrator | ok: [testbed-manager] 2026-02-05 01:39:09.179844 | orchestrator | 2026-02-05 01:39:09.179856 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-05 01:39:09.179888 | orchestrator | Thursday 05 February 2026 01:39:05 +0000 (0:00:00.485) 0:00:05.639 ***** 2026-02-05 01:39:09.179900 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:39:09.179912 | orchestrator | 2026-02-05 01:39:09.179923 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-05 01:39:09.179936 | orchestrator | Thursday 05 February 2026 01:39:05 +0000 (0:00:00.080) 0:00:05.719 ***** 2026-02-05 01:39:09.179948 | orchestrator | changed: [testbed-manager] 2026-02-05 01:39:09.179959 | orchestrator | 2026-02-05 01:39:09.179971 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-05 01:39:09.179982 | orchestrator | Thursday 05 February 2026 01:39:05 +0000 (0:00:00.548) 0:00:06.268 ***** 2026-02-05 01:39:09.179994 | orchestrator | changed: 
[testbed-manager] 2026-02-05 01:39:09.180006 | orchestrator | 2026-02-05 01:39:09.180017 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-05 01:39:09.180029 | orchestrator | Thursday 05 February 2026 01:39:06 +0000 (0:00:01.093) 0:00:07.362 ***** 2026-02-05 01:39:09.180040 | orchestrator | ok: [testbed-manager] 2026-02-05 01:39:09.180050 | orchestrator | 2026-02-05 01:39:09.180060 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-05 01:39:09.180070 | orchestrator | Thursday 05 February 2026 01:39:07 +0000 (0:00:00.948) 0:00:08.310 ***** 2026-02-05 01:39:09.180080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-05 01:39:09.180090 | orchestrator | 2026-02-05 01:39:09.180099 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-05 01:39:09.180109 | orchestrator | Thursday 05 February 2026 01:39:07 +0000 (0:00:00.088) 0:00:08.398 ***** 2026-02-05 01:39:09.180118 | orchestrator | changed: [testbed-manager] 2026-02-05 01:39:09.180128 | orchestrator | 2026-02-05 01:39:09.180138 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:39:09.180148 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 01:39:09.180158 | orchestrator | 2026-02-05 01:39:09.180168 | orchestrator | 2026-02-05 01:39:09.180177 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:39:09.180187 | orchestrator | Thursday 05 February 2026 01:39:08 +0000 (0:00:01.153) 0:00:09.551 ***** 2026-02-05 01:39:09.180196 | orchestrator | =============================================================================== 2026-02-05 01:39:09.180206 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.68s 2026-02-05 01:39:09.180215 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s 2026-02-05 01:39:09.180225 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2026-02-05 01:39:09.180234 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2026-02-05 01:39:09.180244 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-02-05 01:39:09.180254 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2026-02-05 01:39:09.180281 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-02-05 01:39:09.180292 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-05 01:39:09.180301 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-05 01:39:09.180311 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-05 01:39:09.180321 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-02-05 01:39:09.180330 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-05 01:39:09.180340 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-05 01:39:09.452041 | orchestrator | + osism apply sshconfig 2026-02-05 01:39:21.556649 | orchestrator | 2026-02-05 01:39:21 | INFO  | Task aaa632f9-6a95-41b2-8e08-4484267efbab (sshconfig) was prepared for execution. 
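The resolvconf play's key change is the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task, followed by a restart of systemd-resolved. The idempotent-link step can be sketched as a small shell function; `link_stub` is a hypothetical name, and the role itself (osism.commons.resolvconf) uses Ansible modules rather than this script.

```shell
# link_stub RESOLV_CONF STUB: point the resolv.conf path at the
# systemd-resolved stub file, replacing any existing file or stale link.
link_stub() {
    resolv="$1"
    stub="$2"
    # -s symbolic, -f force-replace, -n do not dereference an existing link
    ln -sfn "$stub" "$resolv"
}

# On a real host this would be (requires root):
#   link_stub /etc/resolv.conf /run/systemd/resolve/stub-resolv.conf
#   systemctl restart systemd-resolved
```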
2026-02-05 01:39:21.556809 | orchestrator | 2026-02-05 01:39:21 | INFO  | It takes a moment until task aaa632f9-6a95-41b2-8e08-4484267efbab (sshconfig) has been started and output is visible here. 2026-02-05 01:39:33.329035 | orchestrator | 2026-02-05 01:39:33.329145 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-05 01:39:33.329162 | orchestrator | 2026-02-05 01:39:33.329173 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-05 01:39:33.329185 | orchestrator | Thursday 05 February 2026 01:39:25 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-02-05 01:39:33.329196 | orchestrator | ok: [testbed-manager] 2026-02-05 01:39:33.329208 | orchestrator | 2026-02-05 01:39:33.329240 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-05 01:39:33.329251 | orchestrator | Thursday 05 February 2026 01:39:26 +0000 (0:00:00.525) 0:00:00.681 ***** 2026-02-05 01:39:33.329263 | orchestrator | changed: [testbed-manager] 2026-02-05 01:39:33.329275 | orchestrator | 2026-02-05 01:39:33.329286 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-05 01:39:33.329297 | orchestrator | Thursday 05 February 2026 01:39:26 +0000 (0:00:00.509) 0:00:01.190 ***** 2026-02-05 01:39:33.329308 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-05 01:39:33.329319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-05 01:39:33.329330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-05 01:39:33.329341 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-05 01:39:33.329352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-05 01:39:33.329363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-05 01:39:33.329373 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-05 01:39:33.329384 | orchestrator | 2026-02-05 01:39:33.329395 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-05 01:39:33.329406 | orchestrator | Thursday 05 February 2026 01:39:32 +0000 (0:00:05.593) 0:00:06.783 ***** 2026-02-05 01:39:33.329417 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:39:33.329428 | orchestrator | 2026-02-05 01:39:33.329439 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-05 01:39:33.329450 | orchestrator | Thursday 05 February 2026 01:39:32 +0000 (0:00:00.080) 0:00:06.864 ***** 2026-02-05 01:39:33.329461 | orchestrator | changed: [testbed-manager] 2026-02-05 01:39:33.329472 | orchestrator | 2026-02-05 01:39:33.329482 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:39:33.329494 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:39:33.329506 | orchestrator | 2026-02-05 01:39:33.329517 | orchestrator | 2026-02-05 01:39:33.329528 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:39:33.329539 | orchestrator | Thursday 05 February 2026 01:39:33 +0000 (0:00:00.590) 0:00:07.454 ***** 2026-02-05 01:39:33.329550 | orchestrator | =============================================================================== 2026-02-05 01:39:33.329561 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.59s 2026-02-05 01:39:33.329572 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2026-02-05 01:39:33.329582 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s 2026-02-05 01:39:33.329593 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.51s 2026-02-05 01:39:33.329604 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-05 01:39:33.590676 | orchestrator | + osism apply known-hosts 2026-02-05 01:39:45.670844 | orchestrator | 2026-02-05 01:39:45 | INFO  | Task 9a022b02-4c36-43f4-bd2e-988b75e941c4 (known-hosts) was prepared for execution. 2026-02-05 01:39:45.670940 | orchestrator | 2026-02-05 01:39:45 | INFO  | It takes a moment until task 9a022b02-4c36-43f4-bd2e-988b75e941c4 (known-hosts) has been started and output is visible here. 2026-02-05 01:40:02.198498 | orchestrator | 2026-02-05 01:40:02.198713 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-05 01:40:02.198734 | orchestrator | 2026-02-05 01:40:02.198744 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-05 01:40:02.198755 | orchestrator | Thursday 05 February 2026 01:39:49 +0000 (0:00:00.160) 0:00:00.160 ***** 2026-02-05 01:40:02.198765 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-05 01:40:02.198774 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-05 01:40:02.198807 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-05 01:40:02.198817 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-05 01:40:02.198869 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-05 01:40:02.198880 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-05 01:40:02.198890 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-05 01:40:02.198900 | orchestrator | 2026-02-05 01:40:02.198910 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-05 01:40:02.198920 | orchestrator | Thursday 05 February 2026 01:39:55 +0000 (0:00:05.733) 0:00:05.894 ***** 2026-02-05 
01:40:02.198931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-05 01:40:02.198942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-05 01:40:02.198953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-05 01:40:02.198968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-05 01:40:02.198983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-05 01:40:02.198997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-05 01:40:02.199021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-05 01:40:02.199034 | orchestrator | 2026-02-05 01:40:02.199049 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:02.199062 | orchestrator | Thursday 05 February 2026 01:39:55 +0000 (0:00:00.161) 0:00:06.055 ***** 2026-02-05 01:40:02.199080 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ1HKxz8ZBGvgli1ZMMuwz98dKwC2Hcj627rw3+toouEW8P78nlxVevxke92TD2CDyWkkW4zOJMoSV0B8fuvylE=) 2026-02-05 01:40:02.199105 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC98Fe+5wkKZXQ3/69iZhQqlgcz51Ehkx34bvj05nj0H4NkxMFBH3h8p21RCZ1pOmnAKxpUAB/0IcEVu4lT9WW2jPI4580AMSISgWBL3Q5PV9RTtBF4ZqCsQ4s82UjGbfYtFK/XDRb9gJgixE7wEiiMGWhcYnJlL+nj1jHqzZ00SNS5B/vggIESUpoYYLd+O4YyfRSdBOxIJfRcuOtGB8oVUIXqGcCjk3Q0z0jlz7ocu2UVymi5azzGe+Q3EG+3lrbr4L4UKVB5dS4rVvr4zIQY0/gnH0h6Z8qMzjxWdApzzacrKfdIP80PDzyWHL5BoKmKqzGK25LldUT+HrCEnDZC+M7/fkaRi49WFt+0gC5zKVYAwaCJ5DHCSbN46BuZxKlFSdBxWyyi/OWdnqjtTNJlKRnGURPzLCH0rVFNDsFANSg/x0hgnuX8v8YHNHBC1QGJ2JJ61XNiccL8K0Mq3KaQg5yH3nam/QJzJv40tSA51om10teYVeKsHwgdnNjSWM0=) 2026-02-05 01:40:02.199148 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYGhAf6FDng6tIT7/+cgpUloz2blqGOjS7GDKNTms25) 2026-02-05 01:40:02.199167 | orchestrator | 2026-02-05 01:40:02.199189 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:02.199203 | orchestrator | Thursday 05 February 2026 01:39:56 +0000 (0:00:01.190) 0:00:07.246 ***** 2026-02-05 01:40:02.199216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMtcvfhErCkcXRVNUX2EG5YvteitjK9FULZ1V5ko55f79QPhIhUEOxgv/heqzz+TdXErdzLtJS9j7BCx87cLNw=) 2026-02-05 01:40:02.199262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLQOHfE5rlkzoV5VjVPFRfpGKpusCmfHHtLfc8vla3FB3stSnLDiE90Xnl5Sb64Uv4RmOUKiSDKwvoMpHKz0ihE7PcvF+eZEFUH4FfVfGkQp6rOjMkSWXjUA0J/brSH7mO6h9s7/SsssSfZrC8LjJ4OafRZlVFig7U0iba9qYv5Ng90SDxU+PmONZdlGb2x6Ez+YyPWzVU1ZN193yzPdaHy6/ZNaQDmCqrTE3txMNf5uFQ1oZ1NdEXvRBOkk3oT8BSrLUzpza6WjbKdkC59FNbLWSkV3lqS5J8wV3QQLv95qoGzK/yTFL1xqi5ul+BThm4JFuBiw/icdPjytNilLm0e7M4wToWyeJJhsnPbAGz2tc0EcnOA1+mFxmDiku3ecd+9jbTeObbkT33YQUFkwZx/4PVonqPQEXIDIWUNUDmnLW9Y8u6RKme8jY+3Y1B3v2JQpG/Wok3be4HykBSTiXg3tWq4Ilnibbx2Zb2AMN4IFf8ZXVoIXpKg/Hh9hkmdJM=) 2026-02-05 01:40:02.199281 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKIJacuAmm391DtgRzq/Rx9yFoYoBSxXAQP97NOBr1P4) 2026-02-05 01:40:02.199297 | orchestrator | 2026-02-05 01:40:02.199312 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:02.199327 | orchestrator | Thursday 05 February 2026 01:39:57 +0000 (0:00:01.077) 0:00:08.323 ***** 2026-02-05 01:40:02.199344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFAEfAfazdU7kq5LLigEZcD8RQxqHsWUBeAsYIv2XvURdbLStbSyuD44X2WhE7znPdPHSWyUx9SMa1WEbMhCqhiNZjyRYbAoS1Ti9tvOO88QgiCQFWRr22tdwo4kTJW6qBOjMWewJQ7pdDi2/f3FApul44zvNpYNnij24lnAm6JNiQJfDsjj6s8PzZcrrrArvrguuxqHKDQffnUzijZeMoIjeu5lG2rnW3GjbZWn6xP83kRO8tmA47cohXmbdb2C/h8Maw1k2LhFktyCKX48Ia28Hi8mNFgOjihC/AxW9y3yRQwkyO/7s89zf1/4AGHvTutdv25fOOuvmb27Fw4GAdifYVWo/6mCOAalS4jP0WP3H5vZLrOYDid2FMNqYCu2wYnfhCgM5U+FK0GAc2LA2pmzC1ivTN4P6XCkCaTJpc0kwWqPrVedYodooxU0rE57trXsx/ZOq0dNV8etyqUmzVzl5lQNj4/ztpaZ5etTfh6zN9hduOBM+ucEHey5kF4ts=) 2026-02-05 01:40:02.199360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdZ4LT4YVWANiPiXbwK0/kWa0uvMuMynLflddIcY2GAhFmKWL0gICnYanIbkq8iuRM0ZEXqbnjGJtqXEJCq31w=) 2026-02-05 01:40:02.199374 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzXAR+vNQlOLf7iiqYNW2PuLjZ+GMRSGVpauPq1GSVN) 2026-02-05 01:40:02.199385 | orchestrator | 2026-02-05 01:40:02.199395 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:02.199406 | orchestrator | Thursday 05 February 2026 01:39:58 +0000 (0:00:01.046) 0:00:09.370 ***** 2026-02-05 01:40:02.199416 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN9l9cCbQM+R7h+bvloyt14vbxsVHgiyWXneFJ0fQrvdB8h8aTrtrxFlzQl2J7tInl8TLGJ1nafXSq01ttAWzTA=) 2026-02-05 01:40:02.199427 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFJI11PWzee/n5QD446H9NkS55zRPx/H37+1uRFxdscB) 2026-02-05 01:40:02.199438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP3uwT8LY/LJI/qqREYk2sr84uqiXt6l9uC7JQpgNGvDGdNETf+mkoJLxWq9dUjDHacmG4b95u0WdReZkTd/cXnynxugaocrWdhuyoi3gl1HN6gq0ErGN8aH+Fh3l72jCu4o0mMBshOycZVM2PEBSidkjbKeZOnAFi1p/tawIsbc88JUVzbD3fDkU8/YFx3bQZILvzZXoR367KpJ1yxrypZrh/wffIog5e6DNidzgCT857g9Q76dQcq6IQPPw/ijkSS+dMYP3AMkIcCzL7B3KhrvwH9tgx6g++rWdRkLS6qB67P6EN7DiH/06SjLRrokqGGi0DKzMcTakoyOX88N2cGb/Z1qHF2G5VyUlQZbzX/uee5kYArI66S5GSofX9WyiqYxMu30rKSG0LZ3vcCEkwhvPYYnGl3TAdb0XAzwYSJFhfvWNtXDT4LHHcn+URKU5WXzYhfmzpHgteMjD7lypRQx6EuitKV0lYnvl6cxkPLzUGB6/HF/ylT/eVoZyCY+s=) 2026-02-05 01:40:02.199458 | orchestrator | 2026-02-05 01:40:02.199467 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:02.199476 | orchestrator | Thursday 05 February 2026 01:40:00 +0000 (0:00:01.065) 0:00:10.435 ***** 2026-02-05 01:40:02.199559 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDZVQP1/SCZT6YZFwEkZOjnHAFdxgUtoNmrvihNdQZzKeyVatwpBmqNmHOlp+xrfw47KLOwqdSLR2E8UG1T+YfYFnqyfYT+WGq+mjvxLrtHLd9Y5jo6uGd+HHybs0UY/+nsKyZ9HhhMOwA5t06IEvadFjxi3i369dzQRkbj6fwC8i2hiQOpxKNwsYaoG8xWo1r5lpCjE0VvvcVjk3SdEbJlquKrwN+Zzy4TtF55qBtP8jyukb+hUUeQ8zInV4W53G4P0izQx6rSOcya6F8WjfsqDwji7633Q7O88d9AHUTozbKUTxKP7A+iNQK2QOR7yk0WLZK7+JPonuTrEmBOOemsumoMMOeRzGny5IbgZWECDHvbEUM/eVIgTNeoJjEkcp/qtca7dxQ2qJYUsnOaYDpkl4q0ZR6HJICl+R5WO4IPcxh+I/GPKzFuO4DAENy46IdYLbflTjnt0BCTAvf1T7RWVNHmG0e1/khzWxG2obcC349OKz9SlQmi1SZeAzXc3Ws=) 2026-02-05 01:40:02.199576 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE9wxPYOAsU4XhpXfuOxwLJ3Rodh1R42/wZjdSGyCXelY2fEOF+7tbrsiMeg3MRzEYdIq0o3Fz0U2Yl8KNm9SuU=) 2026-02-05 01:40:02.199592 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBjIcVGUbnv5hQiK3y2F2xs2+hVfcxlCZWjJDVO/71eh) 2026-02-05 01:40:02.199608 | orchestrator | 2026-02-05 01:40:02.199623 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:02.199638 | orchestrator | Thursday 05 February 2026 01:40:01 +0000 (0:00:01.142) 0:00:11.577 ***** 2026-02-05 01:40:02.199663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIx9m3/pNnR9DhSP3OAL2I8zz/qbqQp8DU+pLSBRgmfOd8DyyLj8xWTAadSi2bg7bnppjoIaqSMEerTf4TqDWEKhRAZj+99ZPvmkVzIHl0j/V0oAioB/lDQFMmbp3GBJEqU4E+D7+WmoMvKXSUEAyWRaIFfQIZUPeY/waVC17NY839pXM5L6rBOw4CfnQqocgBgR4ZPNaWWh3D73BMKalBPRoOweIe0rNQJsjm4f9aKeP7L7/5MXj/AhRQpgs0GeRZei4YBNxL44cUSGE+GeRAnOoKHFgr/545wva8WYgw+3LRjnazUKQN1OstzmmR+PbOJbotCfjzM3jlEgjPXpgG9Y0jPDqb7wBp6uXU1e2bHksM7Nb2PozZxb9dkPxqoCLZiH69nWFawnsN8xn7bEZ6aAwOKRxbt3t+i4THH+JDJpWxlSJRTUyQRMprZZp9bppjdqwvWgBSaf9w1fWuNBDCA38PO2NqCV1YbwaNLfsvq86Kgt6Q1bJF5C/5cjrW5M0=) 2026-02-05 01:40:12.793012 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAwKEppWTrx/R8L/P7cq9FQkWfN/FECX50Go674dFmOp3OQ70OecFVfNs3U8/INtqosjt9jF8V/ANEggbU8Ox+U=) 2026-02-05 01:40:12.793123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmZdc/B4h9vpHkEAiFfI3cSxubu8O7q5DJ1YlcyL2sL) 2026-02-05 01:40:12.793140 | orchestrator | 2026-02-05 01:40:12.793152 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:12.793163 | orchestrator | Thursday 05 February 2026 01:40:02 +0000 (0:00:01.032) 0:00:12.610 ***** 2026-02-05 01:40:12.793174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP+UgVC6ufg5xvHaHV8T/pMI9EJMbTgz7zlV++dtirs3) 2026-02-05 01:40:12.793186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZ30x3W3ySkEpWkD1hFko+oCksw9DUjKjEQBVtZJQfGZTdSKc9pNv7cTAFG+ODWbNl2yf35+ijFou01TeKIk0U9T+2PA7fc9DyuHhAySRnk0p7lpH6AEmWnCKysfB5Ih/TC16uaa99x1aejVUzyJiJuuJLy/BuoBzCT1qH1xHGSER9Q59LrtgojStr9tGzJ3P8KMU+KILgkahK4CsAJCb0VmRUwLVx8ysoB84JwAHUAc6bpE8EISJ0/nU1FpjRgrxXeuzwIXnPBTRpAj9e98mCiQsbXtHwaUDz8D4NVtWvMIzOl8X6hBFJysgI0Le00uLJE+qRMS2MhTaifXn4IOoXfk4+j4cwqoBDkdzOihdz74jA00CmrhkSk0/ZVhQ87ynAxbXhx4RPL+QhWept+PT2sh/w6oCAw6x2NpaAilkJMDGClbx4kcQ6gQPX+Qggbnwl7tqsBOtpRCOGZ+vPD0ZZ2u4wFns/xj+fbghRtZQGfHpMuVWwMQhvrYK5Q6JcJIE=) 2026-02-05 01:40:12.793199 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO43jt4IjfBiLz64g26w8LFPpMHr24I6+hgfCJ4scp4uZDIFLZtLCAtWi2GNvAFVlfJ9B9vEUIvLYSppEseOxE0=) 2026-02-05 01:40:12.793229 | orchestrator | 2026-02-05 01:40:12.793239 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-05 01:40:12.793250 | orchestrator | Thursday 05 February 2026 01:40:03 +0000 
(0:00:01.021) 0:00:13.631 ***** 2026-02-05 01:40:12.793261 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-05 01:40:12.793271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-05 01:40:12.793281 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-05 01:40:12.793291 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-05 01:40:12.793301 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-05 01:40:12.793311 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-05 01:40:12.793321 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-05 01:40:12.793330 | orchestrator | 2026-02-05 01:40:12.793341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-05 01:40:12.793352 | orchestrator | Thursday 05 February 2026 01:40:08 +0000 (0:00:05.150) 0:00:18.782 ***** 2026-02-05 01:40:12.793362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-05 01:40:12.793374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-05 01:40:12.793384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-05 01:40:12.793394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-05 01:40:12.793404 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-05 01:40:12.793414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-05 01:40:12.793423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-05 01:40:12.793433 | orchestrator | 2026-02-05 01:40:12.793443 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:12.793453 | orchestrator | Thursday 05 February 2026 01:40:08 +0000 (0:00:00.189) 0:00:18.972 ***** 2026-02-05 01:40:12.793463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYGhAf6FDng6tIT7/+cgpUloz2blqGOjS7GDKNTms25) 2026-02-05 01:40:12.793500 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC98Fe+5wkKZXQ3/69iZhQqlgcz51Ehkx34bvj05nj0H4NkxMFBH3h8p21RCZ1pOmnAKxpUAB/0IcEVu4lT9WW2jPI4580AMSISgWBL3Q5PV9RTtBF4ZqCsQ4s82UjGbfYtFK/XDRb9gJgixE7wEiiMGWhcYnJlL+nj1jHqzZ00SNS5B/vggIESUpoYYLd+O4YyfRSdBOxIJfRcuOtGB8oVUIXqGcCjk3Q0z0jlz7ocu2UVymi5azzGe+Q3EG+3lrbr4L4UKVB5dS4rVvr4zIQY0/gnH0h6Z8qMzjxWdApzzacrKfdIP80PDzyWHL5BoKmKqzGK25LldUT+HrCEnDZC+M7/fkaRi49WFt+0gC5zKVYAwaCJ5DHCSbN46BuZxKlFSdBxWyyi/OWdnqjtTNJlKRnGURPzLCH0rVFNDsFANSg/x0hgnuX8v8YHNHBC1QGJ2JJ61XNiccL8K0Mq3KaQg5yH3nam/QJzJv40tSA51om10teYVeKsHwgdnNjSWM0=) 2026-02-05 01:40:12.793519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ1HKxz8ZBGvgli1ZMMuwz98dKwC2Hcj627rw3+toouEW8P78nlxVevxke92TD2CDyWkkW4zOJMoSV0B8fuvylE=) 2026-02-05 
01:40:12.793537 | orchestrator | 2026-02-05 01:40:12.793548 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:12.793561 | orchestrator | Thursday 05 February 2026 01:40:09 +0000 (0:00:01.047) 0:00:20.020 ***** 2026-02-05 01:40:12.793576 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMtcvfhErCkcXRVNUX2EG5YvteitjK9FULZ1V5ko55f79QPhIhUEOxgv/heqzz+TdXErdzLtJS9j7BCx87cLNw=) 2026-02-05 01:40:12.793589 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLQOHfE5rlkzoV5VjVPFRfpGKpusCmfHHtLfc8vla3FB3stSnLDiE90Xnl5Sb64Uv4RmOUKiSDKwvoMpHKz0ihE7PcvF+eZEFUH4FfVfGkQp6rOjMkSWXjUA0J/brSH7mO6h9s7/SsssSfZrC8LjJ4OafRZlVFig7U0iba9qYv5Ng90SDxU+PmONZdlGb2x6Ez+YyPWzVU1ZN193yzPdaHy6/ZNaQDmCqrTE3txMNf5uFQ1oZ1NdEXvRBOkk3oT8BSrLUzpza6WjbKdkC59FNbLWSkV3lqS5J8wV3QQLv95qoGzK/yTFL1xqi5ul+BThm4JFuBiw/icdPjytNilLm0e7M4wToWyeJJhsnPbAGz2tc0EcnOA1+mFxmDiku3ecd+9jbTeObbkT33YQUFkwZx/4PVonqPQEXIDIWUNUDmnLW9Y8u6RKme8jY+3Y1B3v2JQpG/Wok3be4HykBSTiXg3tWq4Ilnibbx2Zb2AMN4IFf8ZXVoIXpKg/Hh9hkmdJM=) 2026-02-05 01:40:12.793601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKIJacuAmm391DtgRzq/Rx9yFoYoBSxXAQP97NOBr1P4) 2026-02-05 01:40:12.793613 | orchestrator | 2026-02-05 01:40:12.793625 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:12.793637 | orchestrator | Thursday 05 February 2026 01:40:10 +0000 (0:00:01.057) 0:00:21.077 ***** 2026-02-05 01:40:12.793648 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdZ4LT4YVWANiPiXbwK0/kWa0uvMuMynLflddIcY2GAhFmKWL0gICnYanIbkq8iuRM0ZEXqbnjGJtqXEJCq31w=) 2026-02-05 01:40:12.793661 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFAEfAfazdU7kq5LLigEZcD8RQxqHsWUBeAsYIv2XvURdbLStbSyuD44X2WhE7znPdPHSWyUx9SMa1WEbMhCqhiNZjyRYbAoS1Ti9tvOO88QgiCQFWRr22tdwo4kTJW6qBOjMWewJQ7pdDi2/f3FApul44zvNpYNnij24lnAm6JNiQJfDsjj6s8PzZcrrrArvrguuxqHKDQffnUzijZeMoIjeu5lG2rnW3GjbZWn6xP83kRO8tmA47cohXmbdb2C/h8Maw1k2LhFktyCKX48Ia28Hi8mNFgOjihC/AxW9y3yRQwkyO/7s89zf1/4AGHvTutdv25fOOuvmb27Fw4GAdifYVWo/6mCOAalS4jP0WP3H5vZLrOYDid2FMNqYCu2wYnfhCgM5U+FK0GAc2LA2pmzC1ivTN4P6XCkCaTJpc0kwWqPrVedYodooxU0rE57trXsx/ZOq0dNV8etyqUmzVzl5lQNj4/ztpaZ5etTfh6zN9hduOBM+ucEHey5kF4ts=) 2026-02-05 01:40:12.793673 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzXAR+vNQlOLf7iiqYNW2PuLjZ+GMRSGVpauPq1GSVN) 2026-02-05 01:40:12.793684 | orchestrator | 2026-02-05 01:40:12.793695 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:12.793707 | orchestrator | Thursday 05 February 2026 01:40:11 +0000 (0:00:01.042) 0:00:22.119 ***** 2026-02-05 01:40:12.793718 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN9l9cCbQM+R7h+bvloyt14vbxsVHgiyWXneFJ0fQrvdB8h8aTrtrxFlzQl2J7tInl8TLGJ1nafXSq01ttAWzTA=) 2026-02-05 01:40:12.793730 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP3uwT8LY/LJI/qqREYk2sr84uqiXt6l9uC7JQpgNGvDGdNETf+mkoJLxWq9dUjDHacmG4b95u0WdReZkTd/cXnynxugaocrWdhuyoi3gl1HN6gq0ErGN8aH+Fh3l72jCu4o0mMBshOycZVM2PEBSidkjbKeZOnAFi1p/tawIsbc88JUVzbD3fDkU8/YFx3bQZILvzZXoR367KpJ1yxrypZrh/wffIog5e6DNidzgCT857g9Q76dQcq6IQPPw/ijkSS+dMYP3AMkIcCzL7B3KhrvwH9tgx6g++rWdRkLS6qB67P6EN7DiH/06SjLRrokqGGi0DKzMcTakoyOX88N2cGb/Z1qHF2G5VyUlQZbzX/uee5kYArI66S5GSofX9WyiqYxMu30rKSG0LZ3vcCEkwhvPYYnGl3TAdb0XAzwYSJFhfvWNtXDT4LHHcn+URKU5WXzYhfmzpHgteMjD7lypRQx6EuitKV0lYnvl6cxkPLzUGB6/HF/ylT/eVoZyCY+s=) 2026-02-05 01:40:12.793752 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFJI11PWzee/n5QD446H9NkS55zRPx/H37+1uRFxdscB) 2026-02-05 01:40:17.249181 | orchestrator | 2026-02-05 01:40:17.249310 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:17.249352 | orchestrator | Thursday 05 February 2026 01:40:12 +0000 (0:00:01.083) 0:00:23.203 ***** 2026-02-05 01:40:17.249388 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZVQP1/SCZT6YZFwEkZOjnHAFdxgUtoNmrvihNdQZzKeyVatwpBmqNmHOlp+xrfw47KLOwqdSLR2E8UG1T+YfYFnqyfYT+WGq+mjvxLrtHLd9Y5jo6uGd+HHybs0UY/+nsKyZ9HhhMOwA5t06IEvadFjxi3i369dzQRkbj6fwC8i2hiQOpxKNwsYaoG8xWo1r5lpCjE0VvvcVjk3SdEbJlquKrwN+Zzy4TtF55qBtP8jyukb+hUUeQ8zInV4W53G4P0izQx6rSOcya6F8WjfsqDwji7633Q7O88d9AHUTozbKUTxKP7A+iNQK2QOR7yk0WLZK7+JPonuTrEmBOOemsumoMMOeRzGny5IbgZWECDHvbEUM/eVIgTNeoJjEkcp/qtca7dxQ2qJYUsnOaYDpkl4q0ZR6HJICl+R5WO4IPcxh+I/GPKzFuO4DAENy46IdYLbflTjnt0BCTAvf1T7RWVNHmG0e1/khzWxG2obcC349OKz9SlQmi1SZeAzXc3Ws=) 2026-02-05 01:40:17.249415 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE9wxPYOAsU4XhpXfuOxwLJ3Rodh1R42/wZjdSGyCXelY2fEOF+7tbrsiMeg3MRzEYdIq0o3Fz0U2Yl8KNm9SuU=) 2026-02-05 01:40:17.249436 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBjIcVGUbnv5hQiK3y2F2xs2+hVfcxlCZWjJDVO/71eh) 2026-02-05 01:40:17.249456 | orchestrator | 2026-02-05 01:40:17.249473 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:17.249489 | orchestrator | Thursday 05 February 2026 01:40:13 +0000 (0:00:01.023) 0:00:24.226 ***** 2026-02-05 01:40:17.249506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAwKEppWTrx/R8L/P7cq9FQkWfN/FECX50Go674dFmOp3OQ70OecFVfNs3U8/INtqosjt9jF8V/ANEggbU8Ox+U=) 2026-02-05 01:40:17.249527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIx9m3/pNnR9DhSP3OAL2I8zz/qbqQp8DU+pLSBRgmfOd8DyyLj8xWTAadSi2bg7bnppjoIaqSMEerTf4TqDWEKhRAZj+99ZPvmkVzIHl0j/V0oAioB/lDQFMmbp3GBJEqU4E+D7+WmoMvKXSUEAyWRaIFfQIZUPeY/waVC17NY839pXM5L6rBOw4CfnQqocgBgR4ZPNaWWh3D73BMKalBPRoOweIe0rNQJsjm4f9aKeP7L7/5MXj/AhRQpgs0GeRZei4YBNxL44cUSGE+GeRAnOoKHFgr/545wva8WYgw+3LRjnazUKQN1OstzmmR+PbOJbotCfjzM3jlEgjPXpgG9Y0jPDqb7wBp6uXU1e2bHksM7Nb2PozZxb9dkPxqoCLZiH69nWFawnsN8xn7bEZ6aAwOKRxbt3t+i4THH+JDJpWxlSJRTUyQRMprZZp9bppjdqwvWgBSaf9w1fWuNBDCA38PO2NqCV1YbwaNLfsvq86Kgt6Q1bJF5C/5cjrW5M0=) 2026-02-05 01:40:17.249546 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmZdc/B4h9vpHkEAiFfI3cSxubu8O7q5DJ1YlcyL2sL) 2026-02-05 01:40:17.249565 | orchestrator | 2026-02-05 01:40:17.249585 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 01:40:17.249603 | orchestrator | Thursday 05 February 2026 01:40:14 +0000 (0:00:01.091) 0:00:25.317 ***** 2026-02-05 01:40:17.249647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZ30x3W3ySkEpWkD1hFko+oCksw9DUjKjEQBVtZJQfGZTdSKc9pNv7cTAFG+ODWbNl2yf35+ijFou01TeKIk0U9T+2PA7fc9DyuHhAySRnk0p7lpH6AEmWnCKysfB5Ih/TC16uaa99x1aejVUzyJiJuuJLy/BuoBzCT1qH1xHGSER9Q59LrtgojStr9tGzJ3P8KMU+KILgkahK4CsAJCb0VmRUwLVx8ysoB84JwAHUAc6bpE8EISJ0/nU1FpjRgrxXeuzwIXnPBTRpAj9e98mCiQsbXtHwaUDz8D4NVtWvMIzOl8X6hBFJysgI0Le00uLJE+qRMS2MhTaifXn4IOoXfk4+j4cwqoBDkdzOihdz74jA00CmrhkSk0/ZVhQ87ynAxbXhx4RPL+QhWept+PT2sh/w6oCAw6x2NpaAilkJMDGClbx4kcQ6gQPX+Qggbnwl7tqsBOtpRCOGZ+vPD0ZZ2u4wFns/xj+fbghRtZQGfHpMuVWwMQhvrYK5Q6JcJIE=) 2026-02-05 01:40:17.249662 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO43jt4IjfBiLz64g26w8LFPpMHr24I6+hgfCJ4scp4uZDIFLZtLCAtWi2GNvAFVlfJ9B9vEUIvLYSppEseOxE0=) 2026-02-05 01:40:17.249674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP+UgVC6ufg5xvHaHV8T/pMI9EJMbTgz7zlV++dtirs3) 2026-02-05 01:40:17.249685 | orchestrator | 2026-02-05 01:40:17.249696 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-05 01:40:17.249707 | orchestrator | Thursday 05 February 2026 01:40:16 +0000 (0:00:01.107) 0:00:26.424 ***** 2026-02-05 01:40:17.249731 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-05 01:40:17.249745 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-05 01:40:17.249758 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-05 01:40:17.249770 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-05 01:40:17.249783 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 01:40:17.249795 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-05 01:40:17.249846 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-05 01:40:17.249860 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:40:17.249874 | orchestrator | 2026-02-05 01:40:17.249906 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-05 01:40:17.249920 | orchestrator | Thursday 05 February 2026 01:40:16 +0000 (0:00:00.223) 0:00:26.648 ***** 2026-02-05 01:40:17.249933 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:40:17.249945 | orchestrator | 2026-02-05 01:40:17.249958 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-05 01:40:17.249971 | orchestrator | Thursday 05 February 2026 01:40:16 +0000 
(0:00:00.054) 0:00:26.703 ***** 2026-02-05 01:40:17.249984 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:40:17.249996 | orchestrator | 2026-02-05 01:40:17.250009 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-05 01:40:17.250085 | orchestrator | Thursday 05 February 2026 01:40:16 +0000 (0:00:00.056) 0:00:26.760 ***** 2026-02-05 01:40:17.250099 | orchestrator | changed: [testbed-manager] 2026-02-05 01:40:17.250112 | orchestrator | 2026-02-05 01:40:17.250123 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:40:17.250134 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 01:40:17.250146 | orchestrator | 2026-02-05 01:40:17.250158 | orchestrator | 2026-02-05 01:40:17.250168 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:40:17.250186 | orchestrator | Thursday 05 February 2026 01:40:17 +0000 (0:00:00.707) 0:00:27.468 ***** 2026-02-05 01:40:17.250197 | orchestrator | =============================================================================== 2026-02-05 01:40:17.250208 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.73s 2026-02-05 01:40:17.250219 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.15s 2026-02-05 01:40:17.250231 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-05 01:40:17.250242 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-05 01:40:17.250253 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-05 01:40:17.250263 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-05 01:40:17.250274 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-05 01:40:17.250285 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-05 01:40:17.250296 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-05 01:40:17.250307 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-02-05 01:40:17.250325 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-02-05 01:40:17.250344 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-02-05 01:40:17.250361 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-05 01:40:17.250378 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-05 01:40:17.250395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-05 01:40:17.250422 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-05 01:40:17.250440 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-02-05 01:40:17.250459 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.22s 2026-02-05 01:40:17.250476 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-02-05 01:40:17.250495 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-05 01:40:17.564495 | orchestrator | + osism apply squid 2026-02-05 01:40:29.527517 | orchestrator | 2026-02-05 01:40:29 | INFO  | Task 8a40527b-baa3-4829-9d0b-2c2552443665 (squid) was prepared for execution. 
2026-02-05 01:40:29.527607 | orchestrator | 2026-02-05 01:40:29 | INFO  | It takes a moment until task 8a40527b-baa3-4829-9d0b-2c2552443665 (squid) has been started and output is visible here. 2026-02-05 01:42:28.211229 | orchestrator | 2026-02-05 01:42:28.211326 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-05 01:42:28.211338 | orchestrator | 2026-02-05 01:42:28.211345 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-05 01:42:28.211353 | orchestrator | Thursday 05 February 2026 01:40:33 +0000 (0:00:00.119) 0:00:00.119 ***** 2026-02-05 01:42:28.211361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 01:42:28.211370 | orchestrator | 2026-02-05 01:42:28.211377 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-05 01:42:28.211384 | orchestrator | Thursday 05 February 2026 01:40:33 +0000 (0:00:00.070) 0:00:00.189 ***** 2026-02-05 01:42:28.211390 | orchestrator | ok: [testbed-manager] 2026-02-05 01:42:28.211398 | orchestrator | 2026-02-05 01:42:28.211405 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-05 01:42:28.211412 | orchestrator | Thursday 05 February 2026 01:40:34 +0000 (0:00:01.105) 0:00:01.295 ***** 2026-02-05 01:42:28.211419 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-05 01:42:28.211426 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-05 01:42:28.211433 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-05 01:42:28.211440 | orchestrator | 2026-02-05 01:42:28.211446 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-05 01:42:28.211453 | orchestrator | Thursday 
05 February 2026 01:40:35 +0000 (0:00:01.057) 0:00:02.352 ***** 2026-02-05 01:42:28.211460 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-05 01:42:28.211467 | orchestrator | 2026-02-05 01:42:28.211474 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-05 01:42:28.211480 | orchestrator | Thursday 05 February 2026 01:40:36 +0000 (0:00:00.955) 0:00:03.308 ***** 2026-02-05 01:42:28.211487 | orchestrator | ok: [testbed-manager] 2026-02-05 01:42:28.211494 | orchestrator | 2026-02-05 01:42:28.211501 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-05 01:42:28.211507 | orchestrator | Thursday 05 February 2026 01:40:36 +0000 (0:00:00.325) 0:00:03.634 ***** 2026-02-05 01:42:28.211514 | orchestrator | changed: [testbed-manager] 2026-02-05 01:42:28.211521 | orchestrator | 2026-02-05 01:42:28.211529 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-05 01:42:28.211535 | orchestrator | Thursday 05 February 2026 01:40:37 +0000 (0:00:00.799) 0:00:04.433 ***** 2026-02-05 01:42:28.211542 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-05 01:42:28.211550 | orchestrator | ok: [testbed-manager] 2026-02-05 01:42:28.211559 | orchestrator | 2026-02-05 01:42:28.211566 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-05 01:42:28.211573 | orchestrator | Thursday 05 February 2026 01:41:11 +0000 (0:00:34.033) 0:00:38.467 ***** 2026-02-05 01:42:28.211605 | orchestrator | changed: [testbed-manager] 2026-02-05 01:42:28.211612 | orchestrator | 2026-02-05 01:42:28.211619 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-05 01:42:28.211625 | orchestrator | Thursday 05 February 2026 01:41:27 +0000 (0:00:15.697) 0:00:54.164 ***** 2026-02-05 01:42:28.211632 | orchestrator | Pausing for 60 seconds 2026-02-05 01:42:28.211639 | orchestrator | changed: [testbed-manager] 2026-02-05 01:42:28.211646 | orchestrator | 2026-02-05 01:42:28.211653 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-05 01:42:28.211659 | orchestrator | Thursday 05 February 2026 01:42:27 +0000 (0:01:00.079) 0:01:54.243 ***** 2026-02-05 01:42:28.211666 | orchestrator | ok: [testbed-manager] 2026-02-05 01:42:28.211673 | orchestrator | 2026-02-05 01:42:28.211680 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-05 01:42:28.211686 | orchestrator | Thursday 05 February 2026 01:42:27 +0000 (0:00:00.063) 0:01:54.307 ***** 2026-02-05 01:42:28.211693 | orchestrator | changed: [testbed-manager] 2026-02-05 01:42:28.211700 | orchestrator | 2026-02-05 01:42:28.211706 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:42:28.211713 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:42:28.211720 | orchestrator | 2026-02-05 01:42:28.211727 | orchestrator | 2026-02-05 01:42:28.211733 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-05 01:42:28.211740 | orchestrator | Thursday 05 February 2026 01:42:28 +0000 (0:00:00.530) 0:01:54.838 ***** 2026-02-05 01:42:28.211747 | orchestrator | =============================================================================== 2026-02-05 01:42:28.211768 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-02-05 01:42:28.211774 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.03s 2026-02-05 01:42:28.211781 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.70s 2026-02-05 01:42:28.211788 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.11s 2026-02-05 01:42:28.211795 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.06s 2026-02-05 01:42:28.211802 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.96s 2026-02-05 01:42:28.211810 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2026-02-05 01:42:28.211817 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.53s 2026-02-05 01:42:28.211824 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-02-05 01:42:28.211831 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-02-05 01:42:28.211839 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-02-05 01:42:28.390655 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-05 01:42:28.390742 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-05 01:42:28.424334 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 01:42:28.424402 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-05 01:42:28.428996 | orchestrator | + set -e 2026-02-05 01:42:28.429112 | orchestrator | + NAMESPACE=kolla/release 2026-02-05 01:42:28.429127 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-05 01:42:28.433876 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-05 01:42:28.493237 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-05 01:42:28.493399 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-05 01:42:40.217598 | orchestrator | 2026-02-05 01:42:40 | INFO  | Task c847d40e-462b-4c55-a3ab-3052d0723797 (operator) was prepared for execution. 2026-02-05 01:42:40.217686 | orchestrator | 2026-02-05 01:42:40 | INFO  | It takes a moment until task c847d40e-462b-4c55-a3ab-3052d0723797 (operator) has been started and output is visible here. 2026-02-05 01:42:56.929312 | orchestrator | 2026-02-05 01:42:56.929413 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-05 01:42:56.929449 | orchestrator | 2026-02-05 01:42:56.929460 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 01:42:56.929469 | orchestrator | Thursday 05 February 2026 01:42:43 +0000 (0:00:00.139) 0:00:00.139 ***** 2026-02-05 01:42:56.929478 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:42:56.929487 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:42:56.929496 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:42:56.929504 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:42:56.929513 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:42:56.929521 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:42:56.929530 | orchestrator | 2026-02-05 01:42:56.929539 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-05 01:42:56.929547 | orchestrator | Thursday 05 February 2026 01:42:48 +0000 (0:00:04.347) 0:00:04.486 
***** 2026-02-05 01:42:56.929556 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:42:56.929564 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:42:56.929572 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:42:56.929581 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:42:56.929590 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:42:56.929598 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:42:56.929606 | orchestrator | 2026-02-05 01:42:56.929615 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-05 01:42:56.929624 | orchestrator | 2026-02-05 01:42:56.929632 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-05 01:42:56.929641 | orchestrator | Thursday 05 February 2026 01:42:49 +0000 (0:00:00.885) 0:00:05.372 ***** 2026-02-05 01:42:56.929650 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:42:56.929658 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:42:56.929667 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:42:56.929675 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:42:56.929683 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:42:56.929692 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:42:56.929701 | orchestrator | 2026-02-05 01:42:56.929710 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-05 01:42:56.929733 | orchestrator | Thursday 05 February 2026 01:42:49 +0000 (0:00:00.171) 0:00:05.544 ***** 2026-02-05 01:42:56.929742 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:42:56.929750 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:42:56.929759 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:42:56.929767 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:42:56.929776 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:42:56.929784 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:42:56.929793 | orchestrator | 2026-02-05 01:42:56.929802 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-05 01:42:56.929810 | orchestrator | Thursday 05 February 2026 01:42:49 +0000 (0:00:00.184) 0:00:05.728 ***** 2026-02-05 01:42:56.929819 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:42:56.929829 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:42:56.929837 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:42:56.929846 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:42:56.929855 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:42:56.929866 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:42:56.929876 | orchestrator | 2026-02-05 01:42:56.929886 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-05 01:42:56.929896 | orchestrator | Thursday 05 February 2026 01:42:50 +0000 (0:00:00.665) 0:00:06.393 ***** 2026-02-05 01:42:56.929906 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:42:56.929915 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:42:56.929925 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:42:56.929935 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:42:56.929945 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:42:56.929954 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:42:56.929964 | orchestrator | 2026-02-05 01:42:56.929974 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-05 01:42:56.929991 | orchestrator | Thursday 05 February 2026 01:42:51 +0000 (0:00:00.785) 0:00:07.179 ***** 2026-02-05 01:42:56.930005 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-05 01:42:56.930105 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-05 01:42:56.930122 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-05 01:42:56.930137 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-05 01:42:56.930152 | 
orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-05 01:42:56.930167 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-05 01:42:56.930182 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-05 01:42:56.930198 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-05 01:42:56.930212 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-05 01:42:56.930226 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-05 01:42:56.930240 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-05 01:42:56.930255 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-05 01:42:56.930270 | orchestrator | 2026-02-05 01:42:56.930286 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-05 01:42:56.930301 | orchestrator | Thursday 05 February 2026 01:42:52 +0000 (0:00:01.313) 0:00:08.493 ***** 2026-02-05 01:42:56.930316 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:42:56.930332 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:42:56.930343 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:42:56.930351 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:42:56.930360 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:42:56.930368 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:42:56.930376 | orchestrator | 2026-02-05 01:42:56.930385 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-05 01:42:56.930395 | orchestrator | Thursday 05 February 2026 01:42:53 +0000 (0:00:01.250) 0:00:09.743 ***** 2026-02-05 01:42:56.930404 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-05 01:42:56.930413 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-05 01:42:56.930421 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-05 01:42:56.930430 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 01:42:56.930457 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 01:42:56.930466 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 01:42:56.930475 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 01:42:56.930483 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 01:42:56.930492 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 01:42:56.930500 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-05 01:42:56.930509 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-05 01:42:56.930517 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-05 01:42:56.930525 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-05 01:42:56.930534 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-05 01:42:56.930545 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-05 01:42:56.930558 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-05 01:42:56.930571 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-05 01:42:56.930580 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-05 01:42:56.930589 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-05 01:42:56.930599 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-05 01:42:56.930614 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-05 01:42:56.930631 | 
orchestrator | 2026-02-05 01:42:56.930640 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-05 01:42:56.930650 | orchestrator | Thursday 05 February 2026 01:42:54 +0000 (0:00:01.350) 0:00:11.094 ***** 2026-02-05 01:42:56.930658 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:56.930667 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:56.930676 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:42:56.930684 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:56.930693 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:56.930701 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:56.930710 | orchestrator | 2026-02-05 01:42:56.930719 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-05 01:42:56.930727 | orchestrator | Thursday 05 February 2026 01:42:55 +0000 (0:00:00.144) 0:00:11.238 ***** 2026-02-05 01:42:56.930736 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:56.930745 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:56.930753 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:42:56.930762 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:56.930770 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:56.930778 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:56.930787 | orchestrator | 2026-02-05 01:42:56.930796 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-05 01:42:56.930804 | orchestrator | Thursday 05 February 2026 01:42:55 +0000 (0:00:00.157) 0:00:11.396 ***** 2026-02-05 01:42:56.930813 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:42:56.930821 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:42:56.930830 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:42:56.930838 | orchestrator | changed: [testbed-node-5] 2026-02-05 
01:42:56.930847 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:42:56.930855 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:42:56.930864 | orchestrator | 2026-02-05 01:42:56.930872 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-05 01:42:56.930881 | orchestrator | Thursday 05 February 2026 01:42:55 +0000 (0:00:00.564) 0:00:11.960 ***** 2026-02-05 01:42:56.930889 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:56.930898 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:56.930906 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:42:56.930915 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:56.930932 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:56.930941 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:56.930949 | orchestrator | 2026-02-05 01:42:56.930958 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-05 01:42:56.930966 | orchestrator | Thursday 05 February 2026 01:42:55 +0000 (0:00:00.149) 0:00:12.110 ***** 2026-02-05 01:42:56.930975 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-05 01:42:56.930984 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:42:56.930992 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 01:42:56.931001 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:42:56.931009 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 01:42:56.931018 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:42:56.931026 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 01:42:56.931035 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:42:56.931043 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-05 01:42:56.931052 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:42:56.931060 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 
01:42:56.931101 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:42:56.931109 | orchestrator | 2026-02-05 01:42:56.931118 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-05 01:42:56.931127 | orchestrator | Thursday 05 February 2026 01:42:56 +0000 (0:00:00.724) 0:00:12.835 ***** 2026-02-05 01:42:56.931135 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:56.931150 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:56.931158 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:42:56.931167 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:56.931175 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:56.931184 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:56.931192 | orchestrator | 2026-02-05 01:42:56.931201 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-05 01:42:56.931209 | orchestrator | Thursday 05 February 2026 01:42:56 +0000 (0:00:00.130) 0:00:12.966 ***** 2026-02-05 01:42:56.931218 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:56.931227 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:56.931235 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:42:56.931243 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:56.931259 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:58.121806 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:58.121899 | orchestrator | 2026-02-05 01:42:58.121913 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-05 01:42:58.121926 | orchestrator | Thursday 05 February 2026 01:42:56 +0000 (0:00:00.124) 0:00:13.090 ***** 2026-02-05 01:42:58.121937 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:58.121948 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:58.121959 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
01:42:58.121970 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:58.121981 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:58.121991 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:58.122009 | orchestrator | 2026-02-05 01:42:58.122125 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-05 01:42:58.122146 | orchestrator | Thursday 05 February 2026 01:42:57 +0000 (0:00:00.128) 0:00:13.219 ***** 2026-02-05 01:42:58.122166 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:42:58.122186 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:42:58.122204 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:42:58.122224 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:42:58.122243 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:42:58.122262 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:42:58.122281 | orchestrator | 2026-02-05 01:42:58.122297 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-05 01:42:58.122315 | orchestrator | Thursday 05 February 2026 01:42:57 +0000 (0:00:00.694) 0:00:13.914 ***** 2026-02-05 01:42:58.122333 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:42:58.122351 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:42:58.122371 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:42:58.122392 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:42:58.122413 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:42:58.122433 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:42:58.122448 | orchestrator | 2026-02-05 01:42:58.122461 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:42:58.122495 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:42:58.122510 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:42:58.122523 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:42:58.122535 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:42:58.122549 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:42:58.122562 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:42:58.122598 | orchestrator | 2026-02-05 01:42:58.122611 | orchestrator | 2026-02-05 01:42:58.122624 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:42:58.122636 | orchestrator | Thursday 05 February 2026 01:42:57 +0000 (0:00:00.205) 0:00:14.119 ***** 2026-02-05 01:42:58.122649 | orchestrator | =============================================================================== 2026-02-05 01:42:58.122662 | orchestrator | Gathering Facts --------------------------------------------------------- 4.35s 2026-02-05 01:42:58.122675 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.35s 2026-02-05 01:42:58.122687 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.31s 2026-02-05 01:42:58.122699 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2026-02-05 01:42:58.122712 | orchestrator | Do not require tty for all users ---------------------------------------- 0.89s 2026-02-05 01:42:58.122725 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2026-02-05 01:42:58.122737 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2026-02-05 01:42:58.122750 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.69s 2026-02-05 01:42:58.122761 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2026-02-05 01:42:58.122772 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2026-02-05 01:42:58.122783 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2026-02-05 01:42:58.122793 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2026-02-05 01:42:58.122804 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-02-05 01:42:58.122815 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-02-05 01:42:58.122825 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-02-05 01:42:58.122836 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-02-05 01:42:58.122847 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2026-02-05 01:42:58.122857 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-02-05 01:42:58.122868 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s 2026-02-05 01:42:58.316408 | orchestrator | + osism apply --environment custom facts 2026-02-05 01:43:00.001522 | orchestrator | 2026-02-05 01:42:59 | INFO  | Trying to run play facts in environment custom 2026-02-05 01:43:10.142467 | orchestrator | 2026-02-05 01:43:10 | INFO  | Task 3b327401-a1f7-4adf-a4d4-ffe91d937250 (facts) was prepared for execution. 2026-02-05 01:43:10.142569 | orchestrator | 2026-02-05 01:43:10 | INFO  | It takes a moment until task 3b327401-a1f7-4adf-a4d4-ffe91d937250 (facts) has been started and output is visible here. 
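The `facts` play that follows creates a custom facts directory and copies fact files onto each node. For readers unfamiliar with the mechanism, these are Ansible "local facts": JSON files placed under `facts.d` that the setup module exposes as `ansible_local.<name>`. A minimal sketch follows; the directory path and the fact name `example` are illustrative assumptions, not taken from this job (the testbed ships its own fact files such as `testbed_ceph_devices`).

```shell
# Sketch of an Ansible local ("custom") fact. Real deployments use
# /etc/ansible/facts.d; a temp dir is used here so the sketch is runnable.
FACTS_D="$(mktemp -d)/facts.d"
mkdir -p "$FACTS_D"

# A static local fact is plain JSON in a *.fact file...
cat > "$FACTS_D/example.fact" <<'EOF'
{"role": "storage", "devices": ["/dev/sdb", "/dev/sdc"]}
EOF

# ...which the setup module would expose as ansible_local.example.
# Validate that the file is well-formed JSON:
python3 -m json.tool "$FACTS_D/example.fact"
```

After a play like the one above runs, a subsequent fact-gathering pass (such as the "Gathers facts about hosts" task in this log) picks the values up automatically.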
2026-02-05 01:43:56.724406 | orchestrator | 2026-02-05 01:43:56.724509 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-05 01:43:56.724526 | orchestrator | 2026-02-05 01:43:56.724538 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-05 01:43:56.724550 | orchestrator | Thursday 05 February 2026 01:43:14 +0000 (0:00:00.088) 0:00:00.088 ***** 2026-02-05 01:43:56.724561 | orchestrator | ok: [testbed-manager] 2026-02-05 01:43:56.724573 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:43:56.724585 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:43:56.724595 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:43:56.724606 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:43:56.724617 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:43:56.724627 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:43:56.724638 | orchestrator | 2026-02-05 01:43:56.724649 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-05 01:43:56.724687 | orchestrator | Thursday 05 February 2026 01:43:15 +0000 (0:00:01.357) 0:00:01.446 ***** 2026-02-05 01:43:56.724698 | orchestrator | ok: [testbed-manager] 2026-02-05 01:43:56.724709 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:43:56.724720 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:43:56.724730 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:43:56.724741 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:43:56.724751 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:43:56.724762 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:43:56.724773 | orchestrator | 2026-02-05 01:43:56.724783 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-05 01:43:56.724794 | orchestrator | 2026-02-05 01:43:56.724805 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-05 01:43:56.724816 | orchestrator | Thursday 05 February 2026 01:43:16 +0000 (0:00:01.215) 0:00:02.661 ***** 2026-02-05 01:43:56.724826 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.724837 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.724848 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.724858 | orchestrator | 2026-02-05 01:43:56.724869 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-05 01:43:56.724881 | orchestrator | Thursday 05 February 2026 01:43:16 +0000 (0:00:00.099) 0:00:02.760 ***** 2026-02-05 01:43:56.724891 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.724902 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.724912 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.724923 | orchestrator | 2026-02-05 01:43:56.724934 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-05 01:43:56.724947 | orchestrator | Thursday 05 February 2026 01:43:16 +0000 (0:00:00.181) 0:00:02.942 ***** 2026-02-05 01:43:56.724960 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.724973 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.724986 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.724998 | orchestrator | 2026-02-05 01:43:56.725011 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-05 01:43:56.725023 | orchestrator | Thursday 05 February 2026 01:43:17 +0000 (0:00:00.191) 0:00:03.133 ***** 2026-02-05 01:43:56.725038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:43:56.725051 | orchestrator | 2026-02-05 01:43:56.725065 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-05 01:43:56.725077 | orchestrator | Thursday 05 February 2026 01:43:17 +0000 (0:00:00.138) 0:00:03.272 ***** 2026-02-05 01:43:56.725089 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.725101 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.725113 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.725126 | orchestrator | 2026-02-05 01:43:56.725139 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-05 01:43:56.725235 | orchestrator | Thursday 05 February 2026 01:43:17 +0000 (0:00:00.420) 0:00:03.692 ***** 2026-02-05 01:43:56.725250 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:43:56.725263 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:43:56.725276 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:43:56.725290 | orchestrator | 2026-02-05 01:43:56.725302 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-05 01:43:56.725315 | orchestrator | Thursday 05 February 2026 01:43:17 +0000 (0:00:00.136) 0:00:03.829 ***** 2026-02-05 01:43:56.725326 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:43:56.725336 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:43:56.725347 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:43:56.725358 | orchestrator | 2026-02-05 01:43:56.725369 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-05 01:43:56.725379 | orchestrator | Thursday 05 February 2026 01:43:18 +0000 (0:00:01.071) 0:00:04.901 ***** 2026-02-05 01:43:56.725400 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.725411 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.725422 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.725432 | orchestrator | 2026-02-05 01:43:56.725490 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-05 
01:43:56.725502 | orchestrator | Thursday 05 February 2026 01:43:19 +0000 (0:00:00.473) 0:00:05.374 ***** 2026-02-05 01:43:56.725513 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:43:56.725524 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:43:56.725535 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:43:56.725546 | orchestrator | 2026-02-05 01:43:56.725557 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-05 01:43:56.725568 | orchestrator | Thursday 05 February 2026 01:43:20 +0000 (0:00:01.097) 0:00:06.472 ***** 2026-02-05 01:43:56.725579 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:43:56.725590 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:43:56.725601 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:43:56.725612 | orchestrator | 2026-02-05 01:43:56.725623 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-05 01:43:56.725633 | orchestrator | Thursday 05 February 2026 01:43:37 +0000 (0:00:17.409) 0:00:23.882 ***** 2026-02-05 01:43:56.725644 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:43:56.725655 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:43:56.725666 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:43:56.725677 | orchestrator | 2026-02-05 01:43:56.725688 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-05 01:43:56.725718 | orchestrator | Thursday 05 February 2026 01:43:37 +0000 (0:00:00.103) 0:00:23.985 ***** 2026-02-05 01:43:56.725730 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:43:56.725741 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:43:56.725752 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:43:56.725762 | orchestrator | 2026-02-05 01:43:56.725773 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-05 
01:43:56.725784 | orchestrator | Thursday 05 February 2026 01:43:47 +0000 (0:00:09.275) 0:00:33.261 ***** 2026-02-05 01:43:56.725795 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.725806 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.725817 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.725828 | orchestrator | 2026-02-05 01:43:56.725839 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-05 01:43:56.725850 | orchestrator | Thursday 05 February 2026 01:43:47 +0000 (0:00:00.468) 0:00:33.729 ***** 2026-02-05 01:43:56.725861 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-02-05 01:43:56.725872 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-02-05 01:43:56.725883 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-02-05 01:43:56.725894 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-02-05 01:43:56.725910 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-02-05 01:43:56.725921 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-02-05 01:43:56.725932 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-02-05 01:43:56.725943 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-02-05 01:43:56.725953 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-02-05 01:43:56.725964 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-02-05 01:43:56.725975 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-02-05 01:43:56.725990 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-02-05 01:43:56.726009 | orchestrator | 2026-02-05 01:43:56.726109 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-02-05 01:43:56.726129 | orchestrator | Thursday 05 February 2026 01:43:51 +0000 (0:00:03.740) 0:00:37.470 ***** 2026-02-05 01:43:56.726181 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.726197 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.726208 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.726218 | orchestrator | 2026-02-05 01:43:56.726229 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 01:43:56.726240 | orchestrator | 2026-02-05 01:43:56.726251 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-05 01:43:56.726264 | orchestrator | Thursday 05 February 2026 01:43:52 +0000 (0:00:01.518) 0:00:38.989 ***** 2026-02-05 01:43:56.726283 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:43:56.726301 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:43:56.726319 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:43:56.726337 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:43:56.726356 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:43:56.726376 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:43:56.726395 | orchestrator | ok: [testbed-manager] 2026-02-05 01:43:56.726413 | orchestrator | 2026-02-05 01:43:56.726431 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:43:56.726452 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:43:56.726471 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:43:56.726492 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:43:56.726510 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:43:56.726530 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:43:56.726548 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:43:56.726567 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:43:56.726585 | orchestrator |
2026-02-05 01:43:56.726603 | orchestrator |
2026-02-05 01:43:56.726619 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:43:56.726637 | orchestrator | Thursday 05 February 2026 01:43:56 +0000 (0:00:03.706) 0:00:42.695 *****
2026-02-05 01:43:56.726656 | orchestrator | ===============================================================================
2026-02-05 01:43:56.726675 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.41s
2026-02-05 01:43:56.726694 | orchestrator | Install required packages (Debian) -------------------------------------- 9.28s
2026-02-05 01:43:56.726713 | orchestrator | Copy fact files --------------------------------------------------------- 3.74s
2026-02-05 01:43:56.726731 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s
2026-02-05 01:43:56.726747 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.52s
2026-02-05 01:43:56.726758 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-02-05 01:43:56.726781 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-02-05 01:43:56.964337 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-02-05 01:43:56.964466 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-02-05 01:43:56.964494 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-02-05 01:43:56.964514 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-02-05 01:43:56.964533 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2026-02-05 01:43:56.964586 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2026-02-05 01:43:56.964607 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-02-05 01:43:56.964625 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-02-05 01:43:56.964644 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-02-05 01:43:56.964664 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-02-05 01:43:56.964701 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-02-05 01:43:57.266802 | orchestrator | + osism apply bootstrap
2026-02-05 01:44:09.375758 | orchestrator | 2026-02-05 01:44:09 | INFO  | Task 5bbe6598-9c00-4a50-866b-5fc8fcac290c (bootstrap) was prepared for execution.
2026-02-05 01:44:09.375871 | orchestrator | 2026-02-05 01:44:09 | INFO  | It takes a moment until task 5bbe6598-9c00-4a50-866b-5fc8fcac290c (bootstrap) has been started and output is visible here.
2026-02-05 01:44:25.121017 | orchestrator |
2026-02-05 01:44:25.121111 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-05 01:44:25.121122 | orchestrator |
2026-02-05 01:44:25.121129 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-05 01:44:25.121136 | orchestrator | Thursday 05 February 2026 01:44:13 +0000 (0:00:00.146) 0:00:00.146 *****
2026-02-05 01:44:25.121142 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:25.121150 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:25.121156 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:25.121163 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:25.121169 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:25.121175 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:25.121181 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:25.121244 | orchestrator |
2026-02-05 01:44:25.121255 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 01:44:25.121264 | orchestrator |
2026-02-05 01:44:25.121271 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 01:44:25.121277 | orchestrator | Thursday 05 February 2026 01:44:13 +0000 (0:00:00.223) 0:00:00.370 *****
2026-02-05 01:44:25.121284 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:25.121290 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:25.121296 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:25.121302 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:25.121308 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:25.121314 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:25.121320 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:25.121326 | orchestrator |
2026-02-05 01:44:25.121332 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-05 01:44:25.121339 | orchestrator |
2026-02-05 01:44:25.121345 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 01:44:25.121352 | orchestrator | Thursday 05 February 2026 01:44:17 +0000 (0:00:03.718) 0:00:04.088 *****
2026-02-05 01:44:25.121359 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-05 01:44:25.121365 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-05 01:44:25.121371 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-05 01:44:25.121377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-05 01:44:25.121384 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-05 01:44:25.121390 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-05 01:44:25.121396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 01:44:25.121402 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-05 01:44:25.121408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 01:44:25.121414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 01:44:25.121441 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-05 01:44:25.121448 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-05 01:44:25.121454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 01:44:25.121460 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-05 01:44:25.121466 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 01:44:25.121472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 01:44:25.121479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 01:44:25.121485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 01:44:25.121491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 01:44:25.121497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 01:44:25.121503 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:25.121509 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 01:44:25.121515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 01:44:25.121521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-05 01:44:25.121527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 01:44:25.121533 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:44:25.121539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 01:44:25.121545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 01:44:25.121551 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:44:25.121560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 01:44:25.121570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 01:44:25.121581 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-05 01:44:25.121592 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 01:44:25.121603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 01:44:25.121614 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 01:44:25.121624 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-05 01:44:25.121635 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 01:44:25.121642 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-05 01:44:25.121650 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 01:44:25.121658 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:44:25.121665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 01:44:25.121673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-05 01:44:25.121680 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 01:44:25.121688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 01:44:25.121695 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-05 01:44:25.121702 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-05 01:44:25.121723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 01:44:25.121730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 01:44:25.121752 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 01:44:25.121760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 01:44:25.121767 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:44:25.121774 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 01:44:25.121782 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-05 01:44:25.121789 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:44:25.121796 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 01:44:25.121812 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:44:25.121819 | orchestrator |
2026-02-05 01:44:25.121826 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-05 01:44:25.121832 | orchestrator |
2026-02-05 01:44:25.121838 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-05 01:44:25.121844 | orchestrator | Thursday 05 February 2026 01:44:17 +0000 (0:00:00.440) 0:00:04.529 *****
2026-02-05 01:44:25.121850 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:25.121856 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:25.121862 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:25.121869 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:25.121875 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:25.121881 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:25.121887 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:25.121893 | orchestrator |
2026-02-05 01:44:25.121899 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-05 01:44:25.121905 | orchestrator | Thursday 05 February 2026 01:44:19 +0000 (0:00:01.290) 0:00:05.819 *****
2026-02-05 01:44:25.121912 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:25.121918 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:25.121924 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:25.121930 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:25.121936 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:25.121942 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:25.121950 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:25.121960 | orchestrator |
2026-02-05 01:44:25.121969 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-05 01:44:25.121979 | orchestrator | Thursday 05 February 2026 01:44:20 +0000 (0:00:00.244) 0:00:07.028 *****
2026-02-05 01:44:25.121991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:44:25.122004 | orchestrator |
2026-02-05 01:44:25.122066 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-05 01:44:25.122076 | orchestrator | Thursday 05 February 2026 01:44:20 +0000 (0:00:00.244) 0:00:07.272 *****
2026-02-05 01:44:25.122082 | orchestrator | changed: [testbed-manager]
2026-02-05 01:44:25.122089 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:44:25.122095 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:44:25.122101 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:44:25.122107 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:25.122113 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:25.122119 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:25.122125 | orchestrator |
2026-02-05 01:44:25.122131 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-05 01:44:25.122138 | orchestrator | Thursday 05 February 2026 01:44:22 +0000 (0:00:02.005) 0:00:09.278 *****
2026-02-05 01:44:25.122144 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:25.122151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:44:25.122160 | orchestrator |
2026-02-05 01:44:25.122166 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-05 01:44:25.122172 | orchestrator | Thursday 05 February 2026 01:44:22 +0000 (0:00:00.233) 0:00:09.511 *****
2026-02-05 01:44:25.122178 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:44:25.122212 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:44:25.122219 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:25.122225 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:44:25.122231 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:25.122238 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:25.122244 | orchestrator |
2026-02-05 01:44:25.122250 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-05 01:44:25.122262 | orchestrator | Thursday 05 February 2026 01:44:24 +0000 (0:00:01.085) 0:00:10.597 *****
2026-02-05 01:44:25.122268 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:25.122274 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:44:25.122280 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:44:25.122286 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:44:25.122292 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:25.122298 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:25.122304 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:25.122310 | orchestrator |
2026-02-05 01:44:25.122316 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-05 01:44:25.122322 | orchestrator | Thursday 05 February 2026 01:44:24 +0000 (0:00:00.602) 0:00:11.200 *****
2026-02-05 01:44:25.122329 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:44:25.122335 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:44:25.122346 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:44:25.122352 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:44:25.122358 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:44:25.122364 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:44:25.122374 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:25.122385 | orchestrator |
2026-02-05 01:44:25.122395 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-05 01:44:25.122407 | orchestrator | Thursday 05 February 2026 01:44:24 +0000 (0:00:00.223) 0:00:11.576 *****
2026-02-05 01:44:25.122418 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:25.122429 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:44:25.122449 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:44:36.746380 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:44:36.746549 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:44:36.746581 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:44:36.746602 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:44:36.746623 | orchestrator |
2026-02-05 01:44:36.746643 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-05 01:44:36.746669 | orchestrator | Thursday 05 February 2026 01:44:25 +0000 (0:00:00.223) 0:00:11.799 *****
2026-02-05 01:44:36.746697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:44:36.746737 | orchestrator |
2026-02-05 01:44:36.746756 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-05 01:44:36.746774 | orchestrator | Thursday 05 February 2026 01:44:25 +0000 (0:00:00.266) 0:00:12.066 *****
2026-02-05 01:44:36.746790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:44:36.746807 | orchestrator |
2026-02-05 01:44:36.746824 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-05 01:44:36.746841 | orchestrator | Thursday 05 February 2026 01:44:25 +0000 (0:00:00.256) 0:00:12.323 *****
2026-02-05 01:44:36.746865 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.746888 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.746909 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.746927 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.746948 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.746970 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.746990 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.747010 | orchestrator |
2026-02-05 01:44:36.747031 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-05 01:44:36.747050 | orchestrator | Thursday 05 February 2026 01:44:27 +0000 (0:00:01.635) 0:00:13.959 *****
2026-02-05 01:44:36.747071 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:36.747126 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:44:36.747150 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:44:36.747171 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:44:36.747192 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:44:36.747294 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:44:36.747334 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:44:36.747354 | orchestrator |
2026-02-05 01:44:36.747372 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-05 01:44:36.747391 | orchestrator | Thursday 05 February 2026 01:44:27 +0000 (0:00:00.186) 0:00:14.146 *****
2026-02-05 01:44:36.747410 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.747427 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.747445 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.747463 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.747481 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.747498 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.747516 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.747533 | orchestrator |
2026-02-05 01:44:36.747551 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-05 01:44:36.747570 | orchestrator | Thursday 05 February 2026 01:44:28 +0000 (0:00:00.527) 0:00:14.673 *****
2026-02-05 01:44:36.747587 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:36.747605 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:44:36.747622 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:44:36.747639 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:44:36.747658 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:44:36.747675 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:44:36.747690 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:44:36.747711 | orchestrator |
2026-02-05 01:44:36.747729 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-05 01:44:36.747750 | orchestrator | Thursday 05 February 2026 01:44:28 +0000 (0:00:00.262) 0:00:14.935 *****
2026-02-05 01:44:36.747771 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.747789 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:44:36.747808 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:44:36.747827 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:44:36.747845 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:36.747863 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:36.747882 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:36.747901 | orchestrator |
2026-02-05 01:44:36.747919 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-05 01:44:36.747937 | orchestrator | Thursday 05 February 2026 01:44:28 +0000 (0:00:00.520) 0:00:15.456 *****
2026-02-05 01:44:36.747957 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.747976 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:44:36.747995 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:44:36.748015 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:44:36.748028 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:36.748039 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:36.748050 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:36.748060 | orchestrator |
2026-02-05 01:44:36.748071 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-05 01:44:36.748083 | orchestrator | Thursday 05 February 2026 01:44:29 +0000 (0:00:01.128) 0:00:16.585 *****
2026-02-05 01:44:36.748094 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.748105 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.748115 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.748126 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.748137 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.748147 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.748158 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.748169 | orchestrator |
2026-02-05 01:44:36.748180 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-05 01:44:36.748191 | orchestrator | Thursday 05 February 2026 01:44:31 +0000 (0:00:01.051) 0:00:17.636 *****
2026-02-05 01:44:36.748303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:44:36.748318 | orchestrator |
2026-02-05 01:44:36.748329 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-05 01:44:36.748341 | orchestrator | Thursday 05 February 2026 01:44:31 +0000 (0:00:00.259) 0:00:17.895 *****
2026-02-05 01:44:36.748351 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:36.748362 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:36.748373 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:44:36.748384 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:36.748394 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:36.748405 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:44:36.748416 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:44:36.748427 | orchestrator |
2026-02-05 01:44:36.748437 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-05 01:44:36.748449 | orchestrator | Thursday 05 February 2026 01:44:32 +0000 (0:00:01.261) 0:00:19.157 *****
2026-02-05 01:44:36.748459 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.748470 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.748481 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.748491 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.748502 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.748513 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.748523 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.748534 | orchestrator |
2026-02-05 01:44:36.748545 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-05 01:44:36.748556 | orchestrator | Thursday 05 February 2026 01:44:32 +0000 (0:00:00.174) 0:00:19.331 *****
2026-02-05 01:44:36.748567 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.748577 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.748588 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.748598 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.748609 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.748619 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.748630 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.748641 | orchestrator |
2026-02-05 01:44:36.748651 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-05 01:44:36.748662 | orchestrator | Thursday 05 February 2026 01:44:32 +0000 (0:00:00.203) 0:00:19.534 *****
2026-02-05 01:44:36.748673 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.748684 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.748695 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.748705 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.748716 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.748727 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.748737 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.748748 | orchestrator |
2026-02-05 01:44:36.748759 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-05 01:44:36.748770 | orchestrator | Thursday 05 February 2026 01:44:33 +0000 (0:00:00.179) 0:00:19.714 *****
2026-02-05 01:44:36.748782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:44:36.748794 | orchestrator |
2026-02-05 01:44:36.748805 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-05 01:44:36.748816 | orchestrator | Thursday 05 February 2026 01:44:33 +0000 (0:00:00.229) 0:00:19.943 *****
2026-02-05 01:44:36.748827 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.748837 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.748848 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.748859 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.748876 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.748887 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.748897 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.748908 | orchestrator |
2026-02-05 01:44:36.748919 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-05 01:44:36.748930 | orchestrator | Thursday 05 February 2026 01:44:33 +0000 (0:00:00.476) 0:00:20.420 *****
2026-02-05 01:44:36.748941 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:44:36.748951 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:44:36.748962 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:44:36.748973 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:44:36.748984 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:44:36.748994 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:44:36.749005 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:44:36.749016 | orchestrator |
2026-02-05 01:44:36.749027 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-05 01:44:36.749037 | orchestrator | Thursday 05 February 2026 01:44:34 +0000 (0:00:00.176) 0:00:20.597 *****
2026-02-05 01:44:36.749048 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.749059 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.749070 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.749080 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.749091 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:44:36.749102 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:44:36.749112 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:44:36.749123 | orchestrator |
2026-02-05 01:44:36.749134 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-05 01:44:36.749145 | orchestrator | Thursday 05 February 2026 01:44:35 +0000 (0:00:01.011) 0:00:21.608 *****
2026-02-05 01:44:36.749155 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.749166 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.749192 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.749229 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:44:36.749241 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:44:36.749251 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.749262 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:44:36.749273 | orchestrator |
2026-02-05 01:44:36.749284 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-05 01:44:36.749295 | orchestrator | Thursday 05 February 2026 01:44:35 +0000 (0:00:00.651) 0:00:22.260 *****
2026-02-05 01:44:36.749306 | orchestrator | ok: [testbed-manager]
2026-02-05 01:44:36.749317 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:44:36.749327 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:44:36.749338 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:44:36.749356 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:45:16.295874 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:45:16.295980 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:45:16.295994 | orchestrator |
2026-02-05 01:45:16.296005 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-05 01:45:16.296015 | orchestrator | Thursday 05 February 2026 01:44:36 +0000 (0:00:01.063) 0:00:23.324 *****
2026-02-05 01:45:16.296027 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:45:16.296042 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:45:16.296055 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:45:16.296068 | orchestrator | changed: [testbed-manager]
2026-02-05 01:45:16.296080 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:45:16.296094 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:45:16.296108 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:45:16.296122 | orchestrator |
2026-02-05 01:45:16.296136 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-05 01:45:16.296145 | orchestrator | Thursday 05 February 2026 01:44:53 +0000 (0:00:16.800) 0:00:40.125 *****
2026-02-05 01:45:16.296153 | orchestrator | ok: [testbed-manager]
2026-02-05 01:45:16.296161 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:45:16.296169 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:45:16.296199 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:45:16.296208 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:45:16.296216 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:45:16.296229 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:45:16.296242 | orchestrator |
2026-02-05 01:45:16.296330 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-05 01:45:16.296344 | orchestrator | Thursday 05 February 2026 01:44:53 +0000 (0:00:00.181) 0:00:40.306 *****
2026-02-05 01:45:16.296356 | orchestrator | ok: [testbed-manager]
2026-02-05 01:45:16.296370 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:45:16.296384 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:45:16.296398 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:45:16.296412 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:45:16.296426 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:45:16.296439 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:45:16.296453 | orchestrator |
2026-02-05 01:45:16.296467 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-05 01:45:16.296481 | orchestrator | Thursday 05 February 2026 01:44:53 +0000 (0:00:00.188) 0:00:40.495 *****
2026-02-05 01:45:16.296496 | orchestrator | ok: [testbed-manager]
2026-02-05 01:45:16.296510 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:45:16.296520 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:45:16.296530 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:45:16.296539 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:45:16.296551 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:45:16.296565 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:45:16.296580 | orchestrator |
2026-02-05 01:45:16.296593 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-05 01:45:16.296606 | orchestrator | Thursday 05 February 2026 01:44:54 +0000 (0:00:00.180) 0:00:40.675 *****
2026-02-05 01:45:16.296621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:45:16.296638 | orchestrator |
2026-02-05 01:45:16.296652 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-05 01:45:16.296665 | orchestrator | Thursday 05 February 2026 01:44:54 +0000 (0:00:00.270) 0:00:40.946 *****
2026-02-05 01:45:16.296678 | orchestrator | ok: [testbed-manager]
2026-02-05 01:45:16.296691 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:45:16.296704 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:45:16.296717 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:45:16.296731 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:45:16.296744 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:45:16.296752 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:45:16.296760 | orchestrator |
2026-02-05 01:45:16.296768 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-05 01:45:16.296776 | orchestrator | Thursday 05 February 2026 01:44:56 +0000 (0:00:02.038) 0:00:42.985 *****
2026-02-05 01:45:16.296784 | orchestrator | changed: [testbed-manager]
2026-02-05 01:45:16.296792 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:45:16.296800 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:45:16.296808 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:45:16.296816 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:45:16.296824 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:45:16.296832 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:45:16.296839 | orchestrator |
2026-02-05 01:45:16.296847 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-05 01:45:16.296855 | orchestrator | Thursday 05 February 2026 01:44:57 +0000 (0:00:01.138) 0:00:44.123 *****
2026-02-05 01:45:16.296863 | orchestrator | ok: [testbed-manager]
2026-02-05 01:45:16.296871 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:45:16.296878 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:45:16.296886 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:45:16.296894 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:45:16.296914 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:45:16.296922 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:45:16.296930 | orchestrator |
2026-02-05 01:45:16.296938 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-05 01:45:16.296946 | orchestrator | Thursday 05 February 2026 01:44:58 +0000 (0:00:00.865) 0:00:44.989 *****
2026-02-05 01:45:16.296970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:45:16.296980 | orchestrator |
2026-02-05 01:45:16.296988 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-05 01:45:16.296997 | orchestrator | Thursday 05 February 2026 01:44:58 +0000 (0:00:00.243) 0:00:45.233 *****
2026-02-05 01:45:16.297005 | orchestrator | changed: [testbed-manager]
2026-02-05 01:45:16.297013 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:45:16.297020 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:45:16.297028 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:45:16.297036 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:45:16.297044 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:45:16.297052 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:45:16.297059 | orchestrator |
2026-02-05 01:45:16.297085 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-02-05 01:45:16.297094 | orchestrator | Thursday 05 February 2026 01:44:59 +0000 (0:00:01.038) 0:00:46.271 ***** 2026-02-05 01:45:16.297102 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:45:16.297110 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:45:16.297117 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:45:16.297126 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:45:16.297133 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:45:16.297141 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:45:16.297149 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:45:16.297157 | orchestrator | 2026-02-05 01:45:16.297164 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-05 01:45:16.297172 | orchestrator | Thursday 05 February 2026 01:44:59 +0000 (0:00:00.183) 0:00:46.455 ***** 2026-02-05 01:45:16.297180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:45:16.297189 | orchestrator | 2026-02-05 01:45:16.297196 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-05 01:45:16.297204 | orchestrator | Thursday 05 February 2026 01:45:00 +0000 (0:00:00.252) 0:00:46.707 ***** 2026-02-05 01:45:16.297212 | orchestrator | ok: [testbed-manager] 2026-02-05 01:45:16.297220 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:45:16.297227 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:45:16.297235 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:45:16.297243 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:45:16.297291 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:45:16.297305 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:45:16.297317 | 
orchestrator | 2026-02-05 01:45:16.297325 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-05 01:45:16.297333 | orchestrator | Thursday 05 February 2026 01:45:01 +0000 (0:00:01.827) 0:00:48.534 ***** 2026-02-05 01:45:16.297341 | orchestrator | changed: [testbed-manager] 2026-02-05 01:45:16.297349 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:45:16.297357 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:45:16.297364 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:45:16.297372 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:45:16.297380 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:45:16.297388 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:45:16.297396 | orchestrator | 2026-02-05 01:45:16.297404 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-05 01:45:16.297426 | orchestrator | Thursday 05 February 2026 01:45:03 +0000 (0:00:01.101) 0:00:49.636 ***** 2026-02-05 01:45:16.297439 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:45:16.297452 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:45:16.297465 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:45:16.297478 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:45:16.297490 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:45:16.297502 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:45:16.297515 | orchestrator | changed: [testbed-manager] 2026-02-05 01:45:16.297529 | orchestrator | 2026-02-05 01:45:16.297542 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-05 01:45:16.297556 | orchestrator | Thursday 05 February 2026 01:45:13 +0000 (0:00:10.842) 0:01:00.479 ***** 2026-02-05 01:45:16.297564 | orchestrator | ok: [testbed-manager] 2026-02-05 01:45:16.297572 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:45:16.297580 | orchestrator | ok: 
[testbed-node-0] 2026-02-05 01:45:16.297588 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:45:16.297596 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:45:16.297603 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:45:16.297611 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:45:16.297619 | orchestrator | 2026-02-05 01:45:16.297627 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-05 01:45:16.297635 | orchestrator | Thursday 05 February 2026 01:45:14 +0000 (0:00:00.754) 0:01:01.233 ***** 2026-02-05 01:45:16.297643 | orchestrator | ok: [testbed-manager] 2026-02-05 01:45:16.297650 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:45:16.297658 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:45:16.297666 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:45:16.297673 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:45:16.297681 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:45:16.297689 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:45:16.297696 | orchestrator | 2026-02-05 01:45:16.297704 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-05 01:45:16.297712 | orchestrator | Thursday 05 February 2026 01:45:15 +0000 (0:00:00.910) 0:01:02.144 ***** 2026-02-05 01:45:16.297720 | orchestrator | ok: [testbed-manager] 2026-02-05 01:45:16.297728 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:45:16.297736 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:45:16.297743 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:45:16.297751 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:45:16.297759 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:45:16.297767 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:45:16.297774 | orchestrator | 2026-02-05 01:45:16.297782 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-05 01:45:16.297790 | orchestrator | Thursday 
05 February 2026 01:45:15 +0000 (0:00:00.210) 0:01:02.355 ***** 2026-02-05 01:45:16.297798 | orchestrator | ok: [testbed-manager] 2026-02-05 01:45:16.297806 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:45:16.297814 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:45:16.297821 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:45:16.297829 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:45:16.297837 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:45:16.297845 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:45:16.297852 | orchestrator | 2026-02-05 01:45:16.297866 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-05 01:45:16.297874 | orchestrator | Thursday 05 February 2026 01:45:15 +0000 (0:00:00.220) 0:01:02.575 ***** 2026-02-05 01:45:16.297888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:45:16.297901 | orchestrator | 2026-02-05 01:45:16.297927 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-05 01:47:23.906970 | orchestrator | Thursday 05 February 2026 01:45:16 +0000 (0:00:00.302) 0:01:02.877 ***** 2026-02-05 01:47:23.907090 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:23.907113 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907120 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907126 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907133 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907139 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907146 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907152 | orchestrator | 2026-02-05 01:47:23.907159 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-02-05 01:47:23.907166 | orchestrator | Thursday 05 February 2026 01:45:18 +0000 (0:00:01.852) 0:01:04.730 ***** 2026-02-05 01:47:23.907172 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:23.907180 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:23.907186 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:23.907193 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:23.907199 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:47:23.907205 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:23.907211 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:23.907217 | orchestrator | 2026-02-05 01:47:23.907224 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-05 01:47:23.907231 | orchestrator | Thursday 05 February 2026 01:45:18 +0000 (0:00:00.586) 0:01:05.317 ***** 2026-02-05 01:47:23.907238 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:23.907244 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907250 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907256 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907263 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907268 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907274 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907280 | orchestrator | 2026-02-05 01:47:23.907286 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-05 01:47:23.907293 | orchestrator | Thursday 05 February 2026 01:45:18 +0000 (0:00:00.254) 0:01:05.571 ***** 2026-02-05 01:47:23.907300 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:23.907306 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907312 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907319 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907325 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907331 | 
orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907337 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907344 | orchestrator | 2026-02-05 01:47:23.907350 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-05 01:47:23.907356 | orchestrator | Thursday 05 February 2026 01:45:20 +0000 (0:00:01.435) 0:01:07.007 ***** 2026-02-05 01:47:23.907363 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:23.907369 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:23.907457 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:23.907464 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:23.907471 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:47:23.907477 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:23.907483 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:23.907489 | orchestrator | 2026-02-05 01:47:23.907507 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-05 01:47:23.907524 | orchestrator | Thursday 05 February 2026 01:45:22 +0000 (0:00:02.169) 0:01:09.177 ***** 2026-02-05 01:47:23.907531 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:23.907538 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907544 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907551 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907557 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907564 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907570 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907576 | orchestrator | 2026-02-05 01:47:23.907582 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-05 01:47:23.907589 | orchestrator | Thursday 05 February 2026 01:45:25 +0000 (0:00:03.059) 0:01:12.236 ***** 2026-02-05 01:47:23.907606 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:23.907613 
| orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907619 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907626 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907633 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907639 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907646 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907652 | orchestrator | 2026-02-05 01:47:23.907659 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-05 01:47:23.907666 | orchestrator | Thursday 05 February 2026 01:46:02 +0000 (0:00:36.690) 0:01:48.927 ***** 2026-02-05 01:47:23.907673 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:23.907680 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:23.907687 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:47:23.907692 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:23.907699 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:23.907704 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:23.907709 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:23.907715 | orchestrator | 2026-02-05 01:47:23.907722 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-05 01:47:23.907728 | orchestrator | Thursday 05 February 2026 01:47:16 +0000 (0:01:13.673) 0:03:02.601 ***** 2026-02-05 01:47:23.907736 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:23.907744 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907750 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907757 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907763 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907768 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907774 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907780 | orchestrator | 2026-02-05 01:47:23.907787 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-02-05 01:47:23.907793 | orchestrator | Thursday 05 February 2026 01:47:18 +0000 (0:00:02.159) 0:03:04.760 ***** 2026-02-05 01:47:23.907799 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:23.907805 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:23.907811 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:23.907818 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:23.907824 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:23.907830 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:23.907837 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:23.907843 | orchestrator | 2026-02-05 01:47:23.907849 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-05 01:47:23.907855 | orchestrator | Thursday 05 February 2026 01:47:22 +0000 (0:00:04.495) 0:03:09.256 ***** 2026-02-05 01:47:23.907898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-05 01:47:23.907926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-05 01:47:23.907936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-05 01:47:23.907951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-05 01:47:23.907959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-05 01:47:23.907966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-05 01:47:23.907973 | orchestrator | 2026-02-05 01:47:23.907980 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-05 01:47:23.907987 | orchestrator | Thursday 05 February 2026 01:47:23 +0000 (0:00:00.398) 0:03:09.654 ***** 2026-02-05 01:47:23.907993 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-05 01:47:23.908000 | orchestrator | 
skipping: [testbed-manager] 2026-02-05 01:47:23.908006 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-05 01:47:23.908013 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-05 01:47:23.908020 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:47:23.908026 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:47:23.908033 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-05 01:47:23.908039 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:47:23.908046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 01:47:23.908051 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 01:47:23.908058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 01:47:23.908064 | orchestrator | 2026-02-05 01:47:23.908070 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-05 01:47:23.908077 | orchestrator | Thursday 05 February 2026 01:47:23 +0000 (0:00:00.716) 0:03:10.371 ***** 2026-02-05 01:47:23.908088 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-05 01:47:23.908096 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-05 01:47:23.908103 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-05 01:47:23.908110 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-05 01:47:23.908116 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-05 01:47:23.908131 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-05 01:47:31.336598 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-05 01:47:31.336679 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-05 01:47:31.336687 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-05 01:47:31.336710 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-05 01:47:31.336719 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-05 01:47:31.336731 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-05 01:47:31.336741 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-05 01:47:31.336750 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-05 01:47:31.336758 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-05 01:47:31.336766 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-05 01:47:31.336775 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-05 01:47:31.336784 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-05 01:47:31.336792 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-05 01:47:31.336800 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-05 01:47:31.336807 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-05 01:47:31.336814 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-05 01:47:31.336822 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-05 01:47:31.336830 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-05 01:47:31.336838 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-05 01:47:31.336847 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-05 01:47:31.336856 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:31.336866 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-05 01:47:31.336875 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-05 01:47:31.336883 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-05 01:47:31.336891 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-05 01:47:31.336900 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-05 01:47:31.336909 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-05 01:47:31.336915 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:47:31.336920 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-05 01:47:31.336925 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-05 01:47:31.336930 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-05 01:47:31.336936 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-05 01:47:31.336941 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-05 01:47:31.336948 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-05 01:47:31.336955 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-05 01:47:31.336963 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-05 01:47:31.336980 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:47:31.336988 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:47:31.337009 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-05 01:47:31.337015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-05 01:47:31.337020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-05 01:47:31.337025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-05 01:47:31.337030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-05 01:47:31.337049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-02-05 01:47:31.337072 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-05 01:47:31.337077 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-05 01:47:31.337083 | orchestrator 
| changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-02-05 01:47:31.337088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-05 01:47:31.337093 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-02-05 01:47:31.337098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-02-05 01:47:31.337103 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-05 01:47:31.337108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-02-05 01:47:31.337113 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-02-05 01:47:31.337118 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-02-05 01:47:31.337123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-02-05 01:47:31.337128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-02-05 01:47:31.337133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-02-05 01:47:31.337138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-02-05 01:47:31.337143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-02-05 01:47:31.337162 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-02-05 01:47:31.337168 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-02-05 01:47:31.337174 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.core.somaxconn', 'value': 4096}) 2026-02-05 01:47:31.337180 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-02-05 01:47:31.337186 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-02-05 01:47:31.337192 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-02-05 01:47:31.337197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-02-05 01:47:31.337203 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-02-05 01:47:31.337209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-02-05 01:47:31.337216 | orchestrator | 2026-02-05 01:47:31.337223 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-02-05 01:47:31.337234 | orchestrator | Thursday 05 February 2026 01:47:30 +0000 (0:00:06.336) 0:03:16.707 ***** 2026-02-05 01:47:31.337248 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337254 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337260 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337266 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337272 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337283 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-05 01:47:31.337289 | orchestrator | 2026-02-05 
01:47:31.337294 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-02-05 01:47:31.337300 | orchestrator | Thursday 05 February 2026 01:47:30 +0000 (0:00:00.674) 0:03:17.381 ***** 2026-02-05 01:47:31.337306 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:31.337311 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:31.337317 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:31.337326 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:31.337332 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:47:31.337338 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:47:31.337344 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:31.337350 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:47:31.337356 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 01:47:31.337362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 01:47:31.337372 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 01:47:45.684583 | orchestrator | 2026-02-05 01:47:45.684687 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-02-05 01:47:45.684703 | orchestrator | Thursday 05 February 2026 01:47:31 +0000 (0:00:00.536) 0:03:17.918 ***** 2026-02-05 01:47:45.684714 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:45.684726 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:45.684736 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:45.684748 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:45.684757 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:47:45.684767 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-05 01:47:45.684777 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:47:45.684787 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:47:45.684796 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 01:47:45.684806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 01:47:45.684816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-05 01:47:45.684826 | orchestrator | 2026-02-05 01:47:45.684836 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-05 01:47:45.684846 | orchestrator | Thursday 05 February 2026 01:47:32 +0000 (0:00:01.663) 0:03:19.582 ***** 2026-02-05 01:47:45.684878 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 01:47:45.684889 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:45.684898 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 01:47:45.684908 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 01:47:45.684918 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:47:45.684927 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 01:47:45.684937 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-05 01:47:45.684946 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:47:45.684956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-05 01:47:45.684966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-05 01:47:45.684975 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-05 01:47:45.684985 | orchestrator | 2026-02-05 01:47:45.684996 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-05 01:47:45.685008 | orchestrator | Thursday 05 February 2026 01:47:33 +0000 (0:00:00.617) 0:03:20.199 ***** 2026-02-05 01:47:45.685019 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:45.685031 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:47:45.685043 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:47:45.685054 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:47:45.685064 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:47:45.685076 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:47:45.685086 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:47:45.685098 | orchestrator | 2026-02-05 01:47:45.685109 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-05 01:47:45.685121 | orchestrator | Thursday 05 February 2026 01:47:33 +0000 (0:00:00.282) 0:03:20.482 ***** 2026-02-05 01:47:45.685132 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:45.685143 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:45.685155 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:45.685166 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:45.685177 | 
orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:45.685187 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:45.685198 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:45.685210 | orchestrator | 2026-02-05 01:47:45.685221 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-05 01:47:45.685232 | orchestrator | Thursday 05 February 2026 01:47:39 +0000 (0:00:05.288) 0:03:25.771 ***** 2026-02-05 01:47:45.685243 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-05 01:47:45.685255 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-05 01:47:45.685266 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:45.685277 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-05 01:47:45.685288 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:47:45.685298 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-05 01:47:45.685309 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:47:45.685320 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:47:45.685331 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-05 01:47:45.685359 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-05 01:47:45.685371 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:47:45.685410 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:47:45.685429 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-05 01:47:45.685446 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:47:45.685461 | orchestrator | 2026-02-05 01:47:45.685476 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-05 01:47:45.685506 | orchestrator | Thursday 05 February 2026 01:47:39 +0000 (0:00:00.284) 0:03:26.056 ***** 2026-02-05 01:47:45.685522 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-05 01:47:45.685539 | orchestrator | ok: [testbed-node-5] => 
(item=cron) 2026-02-05 01:47:45.685555 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-05 01:47:45.685592 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-05 01:47:45.685608 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-05 01:47:45.685618 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-05 01:47:45.685627 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-05 01:47:45.685637 | orchestrator | 2026-02-05 01:47:45.685647 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-05 01:47:45.685656 | orchestrator | Thursday 05 February 2026 01:47:40 +0000 (0:00:01.089) 0:03:27.145 ***** 2026-02-05 01:47:45.685668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:47:45.685680 | orchestrator | 2026-02-05 01:47:45.685690 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-05 01:47:45.685699 | orchestrator | Thursday 05 February 2026 01:47:41 +0000 (0:00:00.539) 0:03:27.685 ***** 2026-02-05 01:47:45.685709 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:45.685718 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:45.685728 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:45.685737 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:45.685747 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:45.685756 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:45.685766 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:45.685775 | orchestrator | 2026-02-05 01:47:45.685785 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-05 01:47:45.685794 | orchestrator | Thursday 05 February 2026 01:47:42 +0000 (0:00:01.655) 0:03:29.340 
***** 2026-02-05 01:47:45.685804 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:45.685813 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:45.685822 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:45.685832 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:45.685841 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:45.685850 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:45.685860 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:45.685869 | orchestrator | 2026-02-05 01:47:45.685879 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-05 01:47:45.685888 | orchestrator | Thursday 05 February 2026 01:47:43 +0000 (0:00:00.637) 0:03:29.978 ***** 2026-02-05 01:47:45.685898 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:45.685908 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:45.685917 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:45.685926 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:45.685936 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:47:45.685945 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:45.685955 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:45.685964 | orchestrator | 2026-02-05 01:47:45.685974 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-05 01:47:45.685984 | orchestrator | Thursday 05 February 2026 01:47:43 +0000 (0:00:00.602) 0:03:30.580 ***** 2026-02-05 01:47:45.685993 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:45.686003 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:45.686012 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:45.686083 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:45.686093 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:45.686102 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:45.686112 | orchestrator | ok: [testbed-node-2] 2026-02-05 
01:47:45.686122 | orchestrator | 2026-02-05 01:47:45.686132 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-05 01:47:45.686141 | orchestrator | Thursday 05 February 2026 01:47:44 +0000 (0:00:00.667) 0:03:31.248 ***** 2026-02-05 01:47:45.686163 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254670.2081766, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:45.686176 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254675.466444, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:45.686194 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254682.9195492, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:45.686227 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254675.112157, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786378 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254688.8987665, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786533 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254690.431373, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786561 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770254684.6655421, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786611 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786629 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786675 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786695 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786738 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786762 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 
01:47:50.786783 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:47:50.786813 | orchestrator | 2026-02-05 01:47:50.786831 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-05 01:47:50.786849 | orchestrator | Thursday 05 February 2026 01:47:45 +0000 (0:00:01.017) 0:03:32.265 ***** 2026-02-05 01:47:50.786866 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:50.786885 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:50.786903 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:50.786919 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:50.786934 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:47:50.786947 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:50.786959 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:50.786970 | orchestrator | 2026-02-05 01:47:50.786982 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-05 01:47:50.786994 | orchestrator | Thursday 05 February 2026 01:47:46 +0000 (0:00:01.156) 0:03:33.422 ***** 2026-02-05 01:47:50.787005 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:50.787019 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:50.787036 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:50.787052 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:50.787070 | orchestrator | changed: [testbed-node-0] 
2026-02-05 01:47:50.787085 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:50.787101 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:50.787117 | orchestrator | 2026-02-05 01:47:50.787136 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-05 01:47:50.787154 | orchestrator | Thursday 05 February 2026 01:47:48 +0000 (0:00:01.223) 0:03:34.646 ***** 2026-02-05 01:47:50.787171 | orchestrator | changed: [testbed-manager] 2026-02-05 01:47:50.787188 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:47:50.787198 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:47:50.787208 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:47:50.787217 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:47:50.787231 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:47:50.787247 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:47:50.787262 | orchestrator | 2026-02-05 01:47:50.787278 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-05 01:47:50.787293 | orchestrator | Thursday 05 February 2026 01:47:49 +0000 (0:00:01.257) 0:03:35.904 ***** 2026-02-05 01:47:50.787307 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:47:50.787322 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:47:50.787347 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:47:50.787363 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:47:50.787378 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:47:50.787420 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:47:50.787438 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:47:50.787453 | orchestrator | 2026-02-05 01:47:50.787469 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-05 01:47:50.787484 | orchestrator | Thursday 05 February 2026 01:47:49 +0000 (0:00:00.275) 0:03:36.180 ***** 2026-02-05 
01:47:50.787499 | orchestrator | ok: [testbed-manager] 2026-02-05 01:47:50.787515 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:47:50.787529 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:47:50.787543 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:47:50.787559 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:47:50.787574 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:47:50.787589 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:47:50.787604 | orchestrator | 2026-02-05 01:47:50.787619 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-05 01:47:50.787635 | orchestrator | Thursday 05 February 2026 01:47:50 +0000 (0:00:00.778) 0:03:36.958 ***** 2026-02-05 01:47:50.787652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:47:50.787686 | orchestrator | 2026-02-05 01:47:50.787703 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-05 01:47:50.787738 | orchestrator | Thursday 05 February 2026 01:47:50 +0000 (0:00:00.406) 0:03:37.365 ***** 2026-02-05 01:49:06.963042 | orchestrator | ok: [testbed-manager] 2026-02-05 01:49:06.963157 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:49:06.963175 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:49:06.963188 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:49:06.963199 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:49:06.963212 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:49:06.963223 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:49:06.963235 | orchestrator | 2026-02-05 01:49:06.963247 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-05 01:49:06.963260 | orchestrator | 
Thursday 05 February 2026 01:48:00 +0000 (0:00:09.296) 0:03:46.662 ***** 2026-02-05 01:49:06.963272 | orchestrator | ok: [testbed-manager] 2026-02-05 01:49:06.963286 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:49:06.963297 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:49:06.963308 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:49:06.963319 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:49:06.963339 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:49:06.963351 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:49:06.963364 | orchestrator | 2026-02-05 01:49:06.963377 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-05 01:49:06.963395 | orchestrator | Thursday 05 February 2026 01:48:01 +0000 (0:00:01.561) 0:03:48.224 ***** 2026-02-05 01:49:06.963408 | orchestrator | ok: [testbed-manager] 2026-02-05 01:49:06.963420 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:49:06.963430 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:49:06.963441 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:49:06.963510 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:49:06.963523 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:49:06.963534 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:49:06.963546 | orchestrator | 2026-02-05 01:49:06.963558 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-05 01:49:06.963570 | orchestrator | Thursday 05 February 2026 01:48:02 +0000 (0:00:01.185) 0:03:49.409 ***** 2026-02-05 01:49:06.963581 | orchestrator | ok: [testbed-manager] 2026-02-05 01:49:06.963593 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:49:06.963605 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:49:06.963617 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:49:06.963629 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:49:06.963648 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:49:06.963662 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 01:49:06.963675 | orchestrator | 2026-02-05 01:49:06.963687 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-05 01:49:06.963700 | orchestrator | Thursday 05 February 2026 01:48:03 +0000 (0:00:00.304) 0:03:49.714 ***** 2026-02-05 01:49:06.963713 | orchestrator | ok: [testbed-manager] 2026-02-05 01:49:06.963730 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:49:06.963745 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:49:06.963757 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:49:06.963769 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:49:06.963780 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:49:06.963793 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:49:06.963809 | orchestrator | 2026-02-05 01:49:06.963825 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-05 01:49:06.963837 | orchestrator | Thursday 05 February 2026 01:48:03 +0000 (0:00:00.321) 0:03:50.035 ***** 2026-02-05 01:49:06.963849 | orchestrator | ok: [testbed-manager] 2026-02-05 01:49:06.963861 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:49:06.963873 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:49:06.963888 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:49:06.963904 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:49:06.963945 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:49:06.963958 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:49:06.963977 | orchestrator | 2026-02-05 01:49:06.963990 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-05 01:49:06.964002 | orchestrator | Thursday 05 February 2026 01:48:03 +0000 (0:00:00.287) 0:03:50.323 ***** 2026-02-05 01:49:06.964013 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:49:06.964025 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:49:06.964036 | orchestrator | ok: 
[testbed-node-0]
2026-02-05 01:49:06.964046 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:06.964063 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:06.964077 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:06.964089 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:06.964108 | orchestrator |
2026-02-05 01:49:06.964121 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-05 01:49:06.964133 | orchestrator | Thursday 05 February 2026 01:48:09 +0000 (0:00:05.608) 0:03:55.931 *****
2026-02-05 01:49:06.964147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:49:06.964163 | orchestrator |
2026-02-05 01:49:06.964175 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-05 01:49:06.964187 | orchestrator | Thursday 05 February 2026 01:48:09 +0000 (0:00:00.388) 0:03:56.319 *****
2026-02-05 01:49:06.964200 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964211 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-05 01:49:06.964224 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964233 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:06.964240 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-05 01:49:06.964264 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964272 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:06.964279 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-05 01:49:06.964286 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964293 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:06.964300 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-05 01:49:06.964308 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964315 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-05 01:49:06.964322 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:06.964329 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:06.964337 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964362 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-05 01:49:06.964370 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:06.964377 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-05 01:49:06.964384 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-05 01:49:06.964391 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:06.964398 | orchestrator |
2026-02-05 01:49:06.964406 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-05 01:49:06.964413 | orchestrator | Thursday 05 February 2026 01:48:10 +0000 (0:00:00.344) 0:03:56.664 *****
2026-02-05 01:49:06.964420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:49:06.964428 | orchestrator |
2026-02-05 01:49:06.964435 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-05 01:49:06.964442 | orchestrator | Thursday 05 February 2026 01:48:10 +0000 (0:00:00.409) 0:03:57.073 *****
2026-02-05 01:49:06.964479 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-05 01:49:06.964488 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-05 01:49:06.964495 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:06.964503 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-05 01:49:06.964510 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:06.964517 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-05 01:49:06.964524 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:06.964531 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-05 01:49:06.964538 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:06.964546 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-05 01:49:06.964553 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:06.964560 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:06.964567 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-05 01:49:06.964574 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:06.964581 | orchestrator |
2026-02-05 01:49:06.964588 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-05 01:49:06.964595 | orchestrator | Thursday 05 February 2026 01:48:10 +0000 (0:00:00.303) 0:03:57.377 *****
2026-02-05 01:49:06.964603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:49:06.964610 | orchestrator |
2026-02-05 01:49:06.964617 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-05 01:49:06.964625 | orchestrator | Thursday 05 February 2026 01:48:11 +0000 (0:00:00.405) 0:03:57.782 *****
2026-02-05 01:49:06.964632 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:49:06.964639 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:49:06.964646 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:49:06.964653 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:49:06.964661 | orchestrator | changed: [testbed-manager]
2026-02-05 01:49:06.964668 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:49:06.964675 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:49:06.964682 | orchestrator |
2026-02-05 01:49:06.964689 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-05 01:49:06.964696 | orchestrator | Thursday 05 February 2026 01:48:41 +0000 (0:00:30.656) 0:04:28.439 *****
2026-02-05 01:49:06.964704 | orchestrator | changed: [testbed-manager]
2026-02-05 01:49:06.964711 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:49:06.964718 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:49:06.964725 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:49:06.964732 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:49:06.964739 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:49:06.964746 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:49:06.964754 | orchestrator |
2026-02-05 01:49:06.964775 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-05 01:49:06.964787 | orchestrator | Thursday 05 February 2026 01:48:50 +0000 (0:00:08.423) 0:04:36.863 *****
2026-02-05 01:49:06.964794 | orchestrator | changed: [testbed-manager]
2026-02-05 01:49:06.964801 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:49:06.964809 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:49:06.964816 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:49:06.964823 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:49:06.964830 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:49:06.964837 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:49:06.964844 | orchestrator |
2026-02-05 01:49:06.964852 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-05 01:49:06.964859 | orchestrator | Thursday 05 February 2026 01:48:58 +0000 (0:00:08.091) 0:04:44.954 *****
2026-02-05 01:49:06.964871 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:06.964879 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:49:06.964886 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:49:06.964893 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:49:06.964900 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:06.964908 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:06.964915 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:06.964922 | orchestrator |
2026-02-05 01:49:06.964929 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-05 01:49:06.964936 | orchestrator | Thursday 05 February 2026 01:49:00 +0000 (0:00:01.923) 0:04:46.877 *****
2026-02-05 01:49:06.964943 | orchestrator | changed: [testbed-manager]
2026-02-05 01:49:06.964951 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:49:06.964958 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:49:06.964965 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:49:06.964972 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:49:06.964980 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:49:06.964987 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:49:06.964994 | orchestrator |
2026-02-05 01:49:06.965006 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-05 01:49:17.525370 | orchestrator | Thursday 05 February 2026 01:49:06 +0000 (0:00:06.657) 0:04:53.535 *****
2026-02-05 01:49:17.525540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:49:17.525571 | orchestrator |
2026-02-05 01:49:17.525592 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-05 01:49:17.525617 | orchestrator | Thursday 05 February 2026 01:49:07 +0000 (0:00:00.408) 0:04:53.944 *****
2026-02-05 01:49:17.525644 | orchestrator | changed: [testbed-manager]
2026-02-05 01:49:17.525663 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:49:17.525681 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:49:17.525699 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:49:17.525716 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:49:17.525736 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:49:17.525754 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:49:17.525775 | orchestrator |
2026-02-05 01:49:17.525794 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-05 01:49:17.525813 | orchestrator | Thursday 05 February 2026 01:49:08 +0000 (0:00:00.660) 0:04:54.604 *****
2026-02-05 01:49:17.525832 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:17.525845 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:49:17.525856 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:49:17.525871 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:17.525891 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:49:17.525909 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:17.525928 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:17.525946 | orchestrator |
2026-02-05 01:49:17.525965 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-05 01:49:17.525983 | orchestrator | Thursday 05 February 2026 01:49:09 +0000 (0:00:01.975) 0:04:56.580 *****
2026-02-05 01:49:17.526003 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:49:17.526095 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:49:17.526120 | orchestrator | changed: [testbed-manager]
2026-02-05 01:49:17.526139 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:49:17.526157 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:49:17.526169 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:49:17.526181 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:49:17.526192 | orchestrator |
2026-02-05 01:49:17.526203 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-05 01:49:17.526215 | orchestrator | Thursday 05 February 2026 01:49:10 +0000 (0:00:00.750) 0:04:57.330 *****
2026-02-05 01:49:17.526226 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:17.526236 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:17.526277 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:17.526288 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:17.526299 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:17.526310 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:17.526321 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:17.526331 | orchestrator |
2026-02-05 01:49:17.526342 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-05 01:49:17.526353 | orchestrator | Thursday 05 February 2026 01:49:10 +0000 (0:00:00.229) 0:04:57.560 *****
2026-02-05 01:49:17.526364 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:17.526374 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:17.526385 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:17.526396 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:17.526406 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:17.526417 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:17.526427 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:17.526438 | orchestrator |
2026-02-05 01:49:17.526449 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-05 01:49:17.526515 | orchestrator | Thursday 05 February 2026 01:49:11 +0000 (0:00:00.237) 0:04:57.908 *****
2026-02-05 01:49:17.526529 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:17.526540 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:49:17.526551 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:17.526562 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:49:17.526572 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:49:17.526583 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:17.526594 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:17.526604 | orchestrator |
2026-02-05 01:49:17.526615 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-05 01:49:17.526641 | orchestrator | Thursday 05 February 2026 01:49:11 +0000 (0:00:00.236) 0:04:58.146 *****
2026-02-05 01:49:17.526652 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:17.526663 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:17.526674 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:17.526685 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:17.526696 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:17.526706 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:17.526717 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:17.526728 | orchestrator |
2026-02-05 01:49:17.526739 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-05 01:49:17.526751 | orchestrator | Thursday 05 February 2026 01:49:11 +0000 (0:00:00.236) 0:04:58.382 *****
2026-02-05 01:49:17.526762 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:17.526774 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:49:17.526793 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:17.526811 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:49:17.526829 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:49:17.526848 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:17.526866 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:17.526883 | orchestrator |
2026-02-05 01:49:17.526902 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-05 01:49:17.526921 | orchestrator | Thursday 05 February 2026 01:49:12 +0000 (0:00:00.241) 0:04:58.624 *****
2026-02-05 01:49:17.526940 | orchestrator | ok: [testbed-manager] =>
2026-02-05 01:49:17.526960 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.526979 | orchestrator | ok: [testbed-node-3] =>
2026-02-05 01:49:17.526997 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.527016 | orchestrator | ok: [testbed-node-4] =>
2026-02-05 01:49:17.527034 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.527053 | orchestrator | ok: [testbed-node-5] =>
2026-02-05 01:49:17.527072 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.527117 | orchestrator | ok: [testbed-node-0] =>
2026-02-05 01:49:17.527135 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.527164 | orchestrator | ok: [testbed-node-1] =>
2026-02-05 01:49:17.527175 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.527186 | orchestrator | ok: [testbed-node-2] =>
2026-02-05 01:49:17.527197 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 01:49:17.527207 | orchestrator |
2026-02-05 01:49:17.527218 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-05 01:49:17.527229 | orchestrator | Thursday 05 February 2026 01:49:12 +0000 (0:00:00.241) 0:04:58.866 *****
2026-02-05 01:49:17.527240 | orchestrator | ok: [testbed-manager] =>
2026-02-05 01:49:17.527251 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527262 | orchestrator | ok: [testbed-node-3] =>
2026-02-05 01:49:17.527273 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527283 | orchestrator | ok: [testbed-node-4] =>
2026-02-05 01:49:17.527294 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527305 | orchestrator | ok: [testbed-node-5] =>
2026-02-05 01:49:17.527316 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527326 | orchestrator | ok: [testbed-node-0] =>
2026-02-05 01:49:17.527337 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527348 | orchestrator | ok: [testbed-node-1] =>
2026-02-05 01:49:17.527359 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527369 | orchestrator | ok: [testbed-node-2] =>
2026-02-05 01:49:17.527380 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 01:49:17.527391 | orchestrator |
2026-02-05 01:49:17.527402 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-05 01:49:17.527413 | orchestrator | Thursday 05 February 2026 01:49:12 +0000 (0:00:00.234) 0:04:59.101 *****
2026-02-05 01:49:17.527424 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:17.527435 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:17.527446 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:17.527457 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:17.527493 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:17.527504 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:17.527515 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:17.527525 | orchestrator |
2026-02-05 01:49:17.527536 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-05 01:49:17.527547 | orchestrator | Thursday 05 February 2026 01:49:12 +0000 (0:00:00.228) 0:04:59.329 *****
2026-02-05 01:49:17.527558 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:17.527568 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:17.527579 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:17.527589 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:49:17.527600 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:49:17.527611 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:49:17.527621 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:49:17.527632 | orchestrator |
2026-02-05 01:49:17.527642 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-05 01:49:17.527653 | orchestrator | Thursday 05 February 2026 01:49:12 +0000 (0:00:00.246) 0:04:59.576 *****
2026-02-05 01:49:17.527666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:49:17.527680 | orchestrator |
2026-02-05 01:49:17.527690 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-05 01:49:17.527701 | orchestrator | Thursday 05 February 2026 01:49:13 +0000 (0:00:00.363) 0:04:59.939 *****
2026-02-05 01:49:17.527712 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:17.527723 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:49:17.527733 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:17.527744 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:49:17.527755 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:17.527765 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:49:17.527776 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:17.527793 | orchestrator |
2026-02-05 01:49:17.527804 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-05 01:49:17.527815 | orchestrator | Thursday 05 February 2026 01:49:14 +0000 (0:00:00.905) 0:05:00.844 *****
2026-02-05 01:49:17.527826 | orchestrator | ok: [testbed-manager]
2026-02-05 01:49:17.527836 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:49:17.527847 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:49:17.527857 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:49:17.527876 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:49:17.527886 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:49:17.527897 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:49:17.527907 | orchestrator |
2026-02-05 01:49:17.527918 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-05 01:49:17.527930 | orchestrator | Thursday 05 February 2026 01:49:17 +0000 (0:00:02.921) 0:05:03.766 *****
2026-02-05 01:49:17.527941 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-05 01:49:17.527952 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-05 01:49:17.527962 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-05 01:49:17.527973 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-05 01:49:17.527983 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-05 01:49:17.527994 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-05 01:49:17.528011 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:49:17.528029 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-05 01:49:17.528048 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-05 01:49:17.528066 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-05 01:49:17.528083 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:49:17.528101 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-05 01:49:17.528118 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-05 01:49:17.528136 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-05 01:49:17.528153 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:49:17.528169 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-05 01:49:17.528200 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-05 01:50:21.754128 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-05 01:50:21.754202 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:21.754209 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-05 01:50:21.754214 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-05 01:50:21.754220 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-05 01:50:21.754226 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:21.754232 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:21.754239 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-05 01:50:21.754245 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-05 01:50:21.754251 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-05 01:50:21.754258 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:21.754265 | orchestrator |
2026-02-05 01:50:21.754273 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-05 01:50:21.754282 | orchestrator | Thursday 05 February 2026 01:49:17 +0000 (0:00:00.532) 0:05:04.299 *****
2026-02-05 01:50:21.754288 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754295 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754301 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754305 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754309 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754314 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754318 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754322 | orchestrator |
2026-02-05 01:50:21.754326 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-05 01:50:21.754346 | orchestrator | Thursday 05 February 2026 01:49:25 +0000 (0:00:07.706) 0:05:12.005 *****
2026-02-05 01:50:21.754350 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754354 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754358 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754362 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754365 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754369 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754373 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754376 | orchestrator |
2026-02-05 01:50:21.754380 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-05 01:50:21.754384 | orchestrator | Thursday 05 February 2026 01:49:26 +0000 (0:00:01.057) 0:05:13.063 *****
2026-02-05 01:50:21.754388 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754392 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754395 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754399 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754403 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754406 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754410 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754414 | orchestrator |
2026-02-05 01:50:21.754417 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-05 01:50:21.754421 | orchestrator | Thursday 05 February 2026 01:49:35 +0000 (0:00:08.790) 0:05:21.853 *****
2026-02-05 01:50:21.754425 | orchestrator | changed: [testbed-manager]
2026-02-05 01:50:21.754428 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754432 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754436 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754440 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754443 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754447 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754451 | orchestrator |
2026-02-05 01:50:21.754454 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-05 01:50:21.754458 | orchestrator | Thursday 05 February 2026 01:49:38 +0000 (0:00:03.214) 0:05:25.068 *****
2026-02-05 01:50:21.754462 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754466 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754469 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754473 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754477 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754480 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754484 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754488 | orchestrator |
2026-02-05 01:50:21.754491 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-05 01:50:21.754495 | orchestrator | Thursday 05 February 2026 01:49:39 +0000 (0:00:01.239) 0:05:26.307 *****
2026-02-05 01:50:21.754548 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754554 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754558 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754562 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754566 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754576 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754580 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754584 | orchestrator |
2026-02-05 01:50:21.754588 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-05 01:50:21.754593 | orchestrator | Thursday 05 February 2026 01:49:41 +0000 (0:00:01.346) 0:05:27.654 *****
2026-02-05 01:50:21.754597 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:21.754600 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:21.754604 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:21.754608 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:21.754612 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:21.754616 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:21.754624 | orchestrator | changed: [testbed-manager]
2026-02-05 01:50:21.754628 | orchestrator |
2026-02-05 01:50:21.754632 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-05 01:50:21.754636 | orchestrator | Thursday 05 February 2026 01:49:41 +0000 (0:00:00.540) 0:05:28.195 *****
2026-02-05 01:50:21.754640 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754643 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754647 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754651 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754655 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754659 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754662 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754666 | orchestrator |
2026-02-05 01:50:21.754671 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-05 01:50:21.754687 | orchestrator | Thursday 05 February 2026 01:49:51 +0000 (0:00:10.120) 0:05:38.316 *****
2026-02-05 01:50:21.754691 | orchestrator | changed: [testbed-manager]
2026-02-05 01:50:21.754696 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754700 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754705 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754709 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754713 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754718 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754722 | orchestrator |
2026-02-05 01:50:21.754727 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-05 01:50:21.754731 | orchestrator | Thursday 05 February 2026 01:49:52 +0000 (0:00:00.866) 0:05:39.182 *****
2026-02-05 01:50:21.754735 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754740 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754744 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754748 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754753 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754757 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754761 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754766 | orchestrator |
2026-02-05 01:50:21.754770 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-05 01:50:21.754775 | orchestrator | Thursday 05 February 2026 01:50:03 +0000 (0:00:10.774) 0:05:49.957 *****
2026-02-05 01:50:21.754779 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.754783 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.754788 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.754792 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.754796 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.754801 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.754806 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.754812 | orchestrator |
2026-02-05 01:50:21.754818 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-05 01:50:21.754824 | orchestrator | Thursday 05 February 2026 01:50:14 +0000 (0:00:10.957) 0:06:00.915 *****
2026-02-05 01:50:21.754830 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-05 01:50:21.754835 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-05 01:50:21.754841 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-05 01:50:21.754847 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-05 01:50:21.754853 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-05 01:50:21.754859 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-05 01:50:21.754866 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-05 01:50:21.754873 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-05 01:50:21.754880 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-05 01:50:21.754886 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-05 01:50:21.754939 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-05 01:50:21.754950 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-05 01:50:21.754955 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-05 01:50:21.754959 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-05 01:50:21.754963 | orchestrator |
2026-02-05 01:50:21.754968 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-05 01:50:21.754973 | orchestrator | Thursday 05 February 2026 01:50:15 +0000 (0:00:01.289) 0:06:02.204 *****
2026-02-05 01:50:21.754978 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:50:21.754982 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:21.754987 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:21.754991 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:21.754995 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:21.755000 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:21.755004 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:21.755008 | orchestrator |
2026-02-05 01:50:21.755013 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-05 01:50:21.755017 | orchestrator | Thursday 05 February 2026 01:50:16 +0000 (0:00:00.520) 0:06:02.725 *****
2026-02-05 01:50:21.755021 | orchestrator | ok: [testbed-manager]
2026-02-05 01:50:21.755026 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:50:21.755030 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:50:21.755034 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:50:21.755039 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:50:21.755043 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:50:21.755050 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:50:21.755054 | orchestrator |
2026-02-05 01:50:21.755059 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-05 01:50:21.755065 | orchestrator | Thursday 05 February 2026 01:50:20 +0000 (0:00:04.658) 0:06:07.384 *****
2026-02-05 01:50:21.755069 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:50:21.755074 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:21.755078 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:21.755082 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:21.755086 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:21.755091 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:21.755095 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:21.755100 | orchestrator |
2026-02-05 01:50:21.755105 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-05 01:50:21.755109 | orchestrator | Thursday 05 February 2026 01:50:21 +0000 (0:00:00.497) 0:06:07.882 *****
2026-02-05 01:50:21.755114 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-05 01:50:21.755119 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-05 01:50:21.755123 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:50:21.755127 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-05 01:50:21.755131 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-05 01:50:21.755136 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:21.755140 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-05 01:50:21.755144 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-05 01:50:21.755148 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:21.755158 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-05 01:50:42.036696 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-05 01:50:42.036782 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:42.036791 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-05 01:50:42.036797 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-05 01:50:42.036803 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:42.036809 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-05 01:50:42.036834 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-05 01:50:42.036840 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:42.036846 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-05 01:50:42.036852 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-05 01:50:42.036857 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:42.036863 | orchestrator |
2026-02-05 01:50:42.036871 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-05 01:50:42.036877 | orchestrator | Thursday 05 February 2026 01:50:22 +0000 (0:00:00.734) 0:06:08.616 *****
2026-02-05 01:50:42.036883 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:50:42.036888 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:42.036894 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:42.036899 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:42.036905 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:42.036910 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:42.036916 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:42.036921 | orchestrator |
2026-02-05 01:50:42.036927 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-05 01:50:42.036933 | orchestrator | Thursday 05 February 2026 01:50:22 +0000 (0:00:00.539) 0:06:09.156 *****
2026-02-05 01:50:42.036939 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:50:42.036944 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:42.036949 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:42.036955 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:42.036960 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:50:42.036966 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:50:42.036971 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:50:42.036976 | orchestrator |
2026-02-05 01:50:42.036982 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-05 01:50:42.036988 | orchestrator | Thursday 05 February 2026 01:50:23 +0000 (0:00:00.505) 0:06:09.661 *****
2026-02-05 01:50:42.036995 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:50:42.037004 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:50:42.037012 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:50:42.037021 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:50:42.037034 | orchestrator |
skipping: [testbed-node-0] 2026-02-05 01:50:42.037044 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:50:42.037053 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:50:42.037062 | orchestrator | 2026-02-05 01:50:42.037070 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-05 01:50:42.037079 | orchestrator | Thursday 05 February 2026 01:50:23 +0000 (0:00:00.497) 0:06:10.159 ***** 2026-02-05 01:50:42.037086 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037094 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:50:42.037103 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:50:42.037111 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:50:42.037120 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:50:42.037129 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:50:42.037138 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:50:42.037147 | orchestrator | 2026-02-05 01:50:42.037156 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-05 01:50:42.037164 | orchestrator | Thursday 05 February 2026 01:50:25 +0000 (0:00:02.141) 0:06:12.300 ***** 2026-02-05 01:50:42.037175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:50:42.037184 | orchestrator | 2026-02-05 01:50:42.037189 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-05 01:50:42.037195 | orchestrator | Thursday 05 February 2026 01:50:26 +0000 (0:00:00.880) 0:06:13.181 ***** 2026-02-05 01:50:42.037201 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037218 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:50:42.037224 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:50:42.037230 | orchestrator | 
changed: [testbed-node-5] 2026-02-05 01:50:42.037237 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:50:42.037244 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:50:42.037251 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:50:42.037257 | orchestrator | 2026-02-05 01:50:42.037264 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-05 01:50:42.037270 | orchestrator | Thursday 05 February 2026 01:50:27 +0000 (0:00:00.845) 0:06:14.026 ***** 2026-02-05 01:50:42.037277 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037283 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:50:42.037290 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:50:42.037297 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:50:42.037303 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:50:42.037309 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:50:42.037315 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:50:42.037322 | orchestrator | 2026-02-05 01:50:42.037328 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-05 01:50:42.037335 | orchestrator | Thursday 05 February 2026 01:50:28 +0000 (0:00:00.894) 0:06:14.921 ***** 2026-02-05 01:50:42.037341 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037348 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:50:42.037354 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:50:42.037360 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:50:42.037367 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:50:42.037373 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:50:42.037380 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:50:42.037386 | orchestrator | 2026-02-05 01:50:42.037392 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-05 01:50:42.037414 | 
orchestrator | Thursday 05 February 2026 01:50:30 +0000 (0:00:01.670) 0:06:16.592 ***** 2026-02-05 01:50:42.037421 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:50:42.037427 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:50:42.037434 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:50:42.037440 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:50:42.037446 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:50:42.037452 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:50:42.037458 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:50:42.037465 | orchestrator | 2026-02-05 01:50:42.037471 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-05 01:50:42.037478 | orchestrator | Thursday 05 February 2026 01:50:31 +0000 (0:00:01.497) 0:06:18.089 ***** 2026-02-05 01:50:42.037484 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037490 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:50:42.037497 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:50:42.037503 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:50:42.037534 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:50:42.037541 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:50:42.037547 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:50:42.037554 | orchestrator | 2026-02-05 01:50:42.037562 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-05 01:50:42.037569 | orchestrator | Thursday 05 February 2026 01:50:32 +0000 (0:00:01.381) 0:06:19.471 ***** 2026-02-05 01:50:42.037576 | orchestrator | changed: [testbed-manager] 2026-02-05 01:50:42.037583 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:50:42.037591 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:50:42.037598 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:50:42.037605 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:50:42.037612 | 
orchestrator | changed: [testbed-node-1] 2026-02-05 01:50:42.037618 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:50:42.037624 | orchestrator | 2026-02-05 01:50:42.037631 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-05 01:50:42.037637 | orchestrator | Thursday 05 February 2026 01:50:34 +0000 (0:00:01.621) 0:06:21.092 ***** 2026-02-05 01:50:42.037647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:50:42.037654 | orchestrator | 2026-02-05 01:50:42.037660 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-05 01:50:42.037667 | orchestrator | Thursday 05 February 2026 01:50:35 +0000 (0:00:01.013) 0:06:22.106 ***** 2026-02-05 01:50:42.037673 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037679 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:50:42.037685 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:50:42.037691 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:50:42.037697 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:50:42.037704 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:50:42.037710 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:50:42.037716 | orchestrator | 2026-02-05 01:50:42.037722 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-05 01:50:42.037729 | orchestrator | Thursday 05 February 2026 01:50:37 +0000 (0:00:01.530) 0:06:23.637 ***** 2026-02-05 01:50:42.037735 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037741 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:50:42.037747 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:50:42.037753 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:50:42.037759 | orchestrator | 
ok: [testbed-node-0] 2026-02-05 01:50:42.037766 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:50:42.037772 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:50:42.037778 | orchestrator | 2026-02-05 01:50:42.037784 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-05 01:50:42.037790 | orchestrator | Thursday 05 February 2026 01:50:38 +0000 (0:00:01.201) 0:06:24.839 ***** 2026-02-05 01:50:42.037797 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037803 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:50:42.037809 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:50:42.037815 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:50:42.037821 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:50:42.037827 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:50:42.037833 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:50:42.037840 | orchestrator | 2026-02-05 01:50:42.037846 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-05 01:50:42.037852 | orchestrator | Thursday 05 February 2026 01:50:39 +0000 (0:00:01.177) 0:06:26.016 ***** 2026-02-05 01:50:42.037870 | orchestrator | ok: [testbed-manager] 2026-02-05 01:50:42.037877 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:50:42.037883 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:50:42.037889 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:50:42.037895 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:50:42.037901 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:50:42.037907 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:50:42.037914 | orchestrator | 2026-02-05 01:50:42.037920 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-05 01:50:42.037926 | orchestrator | Thursday 05 February 2026 01:50:40 +0000 (0:00:01.453) 0:06:27.469 ***** 2026-02-05 01:50:42.037932 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:50:42.037939 | orchestrator | 2026-02-05 01:50:42.037945 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:50:42.037951 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.851) 0:06:28.321 ***** 2026-02-05 01:50:42.037957 | orchestrator | 2026-02-05 01:50:42.037964 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:50:42.037970 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.039) 0:06:28.360 ***** 2026-02-05 01:50:42.037980 | orchestrator | 2026-02-05 01:50:42.037987 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:50:42.037993 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.038) 0:06:28.399 ***** 2026-02-05 01:50:42.037999 | orchestrator | 2026-02-05 01:50:42.038006 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:50:42.038061 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.046) 0:06:28.446 ***** 2026-02-05 01:51:09.854922 | orchestrator | 2026-02-05 01:51:09.855063 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:51:09.855091 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.039) 0:06:28.485 ***** 2026-02-05 01:51:09.855112 | orchestrator | 2026-02-05 01:51:09.855132 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:51:09.855148 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.037) 0:06:28.523 ***** 2026-02-05 01:51:09.855160 | orchestrator | 2026-02-05 
01:51:09.855171 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 01:51:09.855182 | orchestrator | Thursday 05 February 2026 01:50:41 +0000 (0:00:00.044) 0:06:28.568 ***** 2026-02-05 01:51:09.855193 | orchestrator | 2026-02-05 01:51:09.855204 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-05 01:51:09.855216 | orchestrator | Thursday 05 February 2026 01:50:42 +0000 (0:00:00.038) 0:06:28.606 ***** 2026-02-05 01:51:09.855227 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:09.855239 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:09.855250 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:09.855261 | orchestrator | 2026-02-05 01:51:09.855272 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-05 01:51:09.855283 | orchestrator | Thursday 05 February 2026 01:50:43 +0000 (0:00:01.398) 0:06:30.005 ***** 2026-02-05 01:51:09.855294 | orchestrator | changed: [testbed-manager] 2026-02-05 01:51:09.855306 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:09.855317 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:09.855328 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:09.855339 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:09.855350 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:09.855361 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:09.855372 | orchestrator | 2026-02-05 01:51:09.855383 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-05 01:51:09.855394 | orchestrator | Thursday 05 February 2026 01:50:45 +0000 (0:00:01.593) 0:06:31.598 ***** 2026-02-05 01:51:09.855411 | orchestrator | changed: [testbed-manager] 2026-02-05 01:51:09.855429 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:09.855449 | orchestrator | changed: [testbed-node-5] 2026-02-05 
01:51:09.855469 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:09.855488 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:09.855501 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:09.855543 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:09.855558 | orchestrator | 2026-02-05 01:51:09.855573 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-05 01:51:09.855593 | orchestrator | Thursday 05 February 2026 01:50:46 +0000 (0:00:01.196) 0:06:32.795 ***** 2026-02-05 01:51:09.855613 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:09.855633 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:09.855652 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:09.855672 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:09.855692 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:09.855710 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:09.855728 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:09.855747 | orchestrator | 2026-02-05 01:51:09.855766 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-05 01:51:09.855784 | orchestrator | Thursday 05 February 2026 01:50:48 +0000 (0:00:02.172) 0:06:34.967 ***** 2026-02-05 01:51:09.855802 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:09.855854 | orchestrator | 2026-02-05 01:51:09.855875 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-05 01:51:09.855895 | orchestrator | Thursday 05 February 2026 01:50:48 +0000 (0:00:00.112) 0:06:35.080 ***** 2026-02-05 01:51:09.855913 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:09.855932 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:09.855949 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:09.855969 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:09.855987 | 
orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:09.856005 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:09.856023 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:09.856042 | orchestrator | 2026-02-05 01:51:09.856060 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-05 01:51:09.856079 | orchestrator | Thursday 05 February 2026 01:50:49 +0000 (0:00:01.154) 0:06:36.235 ***** 2026-02-05 01:51:09.856098 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:09.856160 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:09.856182 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:09.856201 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:09.856219 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:09.856238 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:09.856257 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:09.856276 | orchestrator | 2026-02-05 01:51:09.856294 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-05 01:51:09.856312 | orchestrator | Thursday 05 February 2026 01:50:50 +0000 (0:00:00.566) 0:06:36.801 ***** 2026-02-05 01:51:09.856332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:51:09.856355 | orchestrator | 2026-02-05 01:51:09.856373 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-05 01:51:09.856391 | orchestrator | Thursday 05 February 2026 01:50:51 +0000 (0:00:01.094) 0:06:37.896 ***** 2026-02-05 01:51:09.856411 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:09.856429 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:09.856449 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 01:51:09.856468 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:09.856485 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:09.856504 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:09.856619 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:09.856641 | orchestrator | 2026-02-05 01:51:09.856659 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-05 01:51:09.856680 | orchestrator | Thursday 05 February 2026 01:50:52 +0000 (0:00:01.001) 0:06:38.897 ***** 2026-02-05 01:51:09.856699 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-05 01:51:09.856746 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-05 01:51:09.856766 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-05 01:51:09.856785 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-05 01:51:09.856802 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-05 01:51:09.856822 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-05 01:51:09.856840 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-05 01:51:09.856858 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-05 01:51:09.856878 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-05 01:51:09.856898 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-05 01:51:09.856917 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-05 01:51:09.856936 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-05 01:51:09.856956 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-05 01:51:09.856990 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-05 01:51:09.857011 | orchestrator | 2026-02-05 01:51:09.857031 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-05 01:51:09.857050 | orchestrator | Thursday 05 February 2026 01:50:55 +0000 (0:00:02.770) 0:06:41.668 ***** 2026-02-05 01:51:09.857068 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:09.857085 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:09.857096 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:09.857107 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:09.857117 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:09.857128 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:09.857139 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:09.857205 | orchestrator | 2026-02-05 01:51:09.857218 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-05 01:51:09.857229 | orchestrator | Thursday 05 February 2026 01:50:55 +0000 (0:00:00.706) 0:06:42.375 ***** 2026-02-05 01:51:09.857269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:51:09.857283 | orchestrator | 2026-02-05 01:51:09.857294 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-05 01:51:09.857305 | orchestrator | Thursday 05 February 2026 01:50:56 +0000 (0:00:00.811) 0:06:43.187 ***** 2026-02-05 01:51:09.857316 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:09.857326 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:09.857337 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:09.857347 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:09.857358 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:09.857369 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:09.857379 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 01:51:09.857390 | orchestrator | 2026-02-05 01:51:09.857401 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-05 01:51:09.857412 | orchestrator | Thursday 05 February 2026 01:50:57 +0000 (0:00:00.913) 0:06:44.100 ***** 2026-02-05 01:51:09.857422 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:09.857433 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:09.857444 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:09.857454 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:09.857465 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:09.857475 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:09.857486 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:09.857496 | orchestrator | 2026-02-05 01:51:09.857507 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-05 01:51:09.857666 | orchestrator | Thursday 05 February 2026 01:50:58 +0000 (0:00:01.084) 0:06:45.185 ***** 2026-02-05 01:51:09.857693 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:09.857705 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:09.857715 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:09.857726 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:09.857737 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:09.857747 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:09.857758 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:09.857769 | orchestrator | 2026-02-05 01:51:09.857781 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-05 01:51:09.857792 | orchestrator | Thursday 05 February 2026 01:50:59 +0000 (0:00:00.506) 0:06:45.691 ***** 2026-02-05 01:51:09.857802 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:09.857814 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:09.857824 | 
orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:09.857835 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:09.857846 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:09.857857 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:09.857867 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:09.857891 | orchestrator | 2026-02-05 01:51:09.857902 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-05 01:51:09.857913 | orchestrator | Thursday 05 February 2026 01:51:00 +0000 (0:00:01.672) 0:06:47.364 ***** 2026-02-05 01:51:09.857924 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:09.857935 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:09.857946 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:09.857956 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:09.857983 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:09.858007 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:09.858095 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:09.858106 | orchestrator | 2026-02-05 01:51:09.858116 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-05 01:51:09.858133 | orchestrator | Thursday 05 February 2026 01:51:01 +0000 (0:00:00.500) 0:06:47.864 ***** 2026-02-05 01:51:09.858150 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:09.858166 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:09.858175 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:09.858185 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:09.858195 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:09.858204 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:09.858230 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:42.011403 | orchestrator | 2026-02-05 01:51:42.011542 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-05 01:51:42.011596 | orchestrator | Thursday 05 February 2026 01:51:09 +0000 (0:00:08.562) 0:06:56.427 ***** 2026-02-05 01:51:42.011615 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.011627 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:42.011637 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:42.011647 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:42.011657 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:42.011667 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:42.011677 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:42.011687 | orchestrator | 2026-02-05 01:51:42.011697 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-05 01:51:42.011707 | orchestrator | Thursday 05 February 2026 01:51:11 +0000 (0:00:01.619) 0:06:58.047 ***** 2026-02-05 01:51:42.011717 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.011727 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:42.011737 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:42.011747 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:42.011814 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:42.011825 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:42.011835 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:42.011845 | orchestrator | 2026-02-05 01:51:42.011855 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-05 01:51:42.011865 | orchestrator | Thursday 05 February 2026 01:51:13 +0000 (0:00:01.809) 0:06:59.856 ***** 2026-02-05 01:51:42.011875 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.011886 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:42.011897 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:42.011909 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:42.011920 | 
orchestrator | changed: [testbed-node-0] 2026-02-05 01:51:42.011932 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:42.011943 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:42.011954 | orchestrator | 2026-02-05 01:51:42.011966 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 01:51:42.011978 | orchestrator | Thursday 05 February 2026 01:51:15 +0000 (0:00:01.770) 0:07:01.627 ***** 2026-02-05 01:51:42.011989 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.012001 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.012012 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.012024 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.012036 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.012072 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.012083 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.012092 | orchestrator | 2026-02-05 01:51:42.012105 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 01:51:42.012122 | orchestrator | Thursday 05 February 2026 01:51:15 +0000 (0:00:00.871) 0:07:02.498 ***** 2026-02-05 01:51:42.012138 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:42.012162 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:42.012180 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:42.012196 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:42.012212 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:42.012228 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:42.012243 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:42.012261 | orchestrator | 2026-02-05 01:51:42.012278 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-05 01:51:42.012294 | orchestrator | Thursday 05 February 2026 01:51:16 +0000 (0:00:01.074) 0:07:03.573 ***** 
2026-02-05 01:51:42.012311 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:42.012328 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:42.012389 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:42.012410 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:42.012427 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:42.012439 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:42.012449 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:42.012458 | orchestrator | 2026-02-05 01:51:42.012468 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-05 01:51:42.012497 | orchestrator | Thursday 05 February 2026 01:51:17 +0000 (0:00:00.590) 0:07:04.163 ***** 2026-02-05 01:51:42.012509 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.012563 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.012579 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.012594 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.012610 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.012632 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.012648 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.012664 | orchestrator | 2026-02-05 01:51:42.012680 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-05 01:51:42.012697 | orchestrator | Thursday 05 February 2026 01:51:18 +0000 (0:00:00.524) 0:07:04.688 ***** 2026-02-05 01:51:42.012713 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.012730 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.012746 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.012764 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.012774 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.012784 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.012793 | orchestrator | ok: [testbed-node-2] 2026-02-05 
01:51:42.012803 | orchestrator | 2026-02-05 01:51:42.012813 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-05 01:51:42.012823 | orchestrator | Thursday 05 February 2026 01:51:18 +0000 (0:00:00.540) 0:07:05.228 ***** 2026-02-05 01:51:42.012838 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.012859 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.012880 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.012897 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.012912 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.012927 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.012942 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.012957 | orchestrator | 2026-02-05 01:51:42.012974 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-05 01:51:42.012989 | orchestrator | Thursday 05 February 2026 01:51:19 +0000 (0:00:00.668) 0:07:05.897 ***** 2026-02-05 01:51:42.013004 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.013021 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.013038 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.013055 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.013088 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.013100 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.013110 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.013119 | orchestrator | 2026-02-05 01:51:42.013166 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-05 01:51:42.013189 | orchestrator | Thursday 05 February 2026 01:51:23 +0000 (0:00:04.443) 0:07:10.340 ***** 2026-02-05 01:51:42.013206 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:51:42.013222 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:51:42.013240 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:51:42.013256 
| orchestrator | skipping: [testbed-node-5] 2026-02-05 01:51:42.013273 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:51:42.013289 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:51:42.013304 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:51:42.013319 | orchestrator | 2026-02-05 01:51:42.013334 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-05 01:51:42.013351 | orchestrator | Thursday 05 February 2026 01:51:24 +0000 (0:00:00.509) 0:07:10.849 ***** 2026-02-05 01:51:42.013369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:51:42.013388 | orchestrator | 2026-02-05 01:51:42.013404 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-05 01:51:42.013448 | orchestrator | Thursday 05 February 2026 01:51:25 +0000 (0:00:00.941) 0:07:11.791 ***** 2026-02-05 01:51:42.013464 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.013481 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.013497 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.013514 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.013554 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.013570 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.013587 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.013602 | orchestrator | 2026-02-05 01:51:42.013612 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-05 01:51:42.013622 | orchestrator | Thursday 05 February 2026 01:51:27 +0000 (0:00:02.243) 0:07:14.035 ***** 2026-02-05 01:51:42.013636 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.013652 | orchestrator | ok: [testbed-node-3] 2026-02-05 
01:51:42.013668 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.013684 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.013700 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.013716 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.013732 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.013749 | orchestrator | 2026-02-05 01:51:42.013767 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-05 01:51:42.013783 | orchestrator | Thursday 05 February 2026 01:51:28 +0000 (0:00:01.176) 0:07:15.211 ***** 2026-02-05 01:51:42.013800 | orchestrator | ok: [testbed-manager] 2026-02-05 01:51:42.013816 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:51:42.013832 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:51:42.013842 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:51:42.013851 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:51:42.013861 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:51:42.013871 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:51:42.013880 | orchestrator | 2026-02-05 01:51:42.013890 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-05 01:51:42.013900 | orchestrator | Thursday 05 February 2026 01:51:29 +0000 (0:00:00.895) 0:07:16.107 ***** 2026-02-05 01:51:42.013910 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.013921 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.013947 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.013972 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.014000 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.014088 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.014113 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-05 01:51:42.014130 | orchestrator | 2026-02-05 01:51:42.014148 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-05 01:51:42.014165 | orchestrator | Thursday 05 February 2026 01:51:31 +0000 (0:00:01.936) 0:07:18.043 ***** 2026-02-05 01:51:42.014183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:51:42.014201 | orchestrator | 2026-02-05 01:51:42.014217 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-05 01:51:42.014234 | orchestrator | Thursday 05 February 2026 01:51:32 +0000 (0:00:00.767) 0:07:18.811 ***** 2026-02-05 01:51:42.014252 | orchestrator | changed: [testbed-manager] 2026-02-05 01:51:42.014269 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:51:42.014283 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:51:42.014293 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:51:42.014303 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:51:42.014313 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:51:42.014330 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 01:51:42.014345 | orchestrator | 2026-02-05 01:51:42.014380 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-05 01:52:13.375726 | orchestrator | Thursday 05 February 2026 01:51:41 +0000 (0:00:09.775) 0:07:28.586 ***** 2026-02-05 01:52:13.375806 | orchestrator | ok: [testbed-manager] 2026-02-05 01:52:13.375814 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:52:13.375818 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:52:13.375823 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:52:13.375827 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:52:13.375831 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:52:13.375835 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:52:13.375839 | orchestrator | 2026-02-05 01:52:13.375844 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-05 01:52:13.375848 | orchestrator | Thursday 05 February 2026 01:51:43 +0000 (0:00:01.899) 0:07:30.486 ***** 2026-02-05 01:52:13.375852 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:52:13.375856 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:52:13.375860 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:52:13.375864 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:52:13.375868 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:52:13.375872 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:52:13.375876 | orchestrator | 2026-02-05 01:52:13.375880 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-05 01:52:13.375884 | orchestrator | Thursday 05 February 2026 01:51:45 +0000 (0:00:01.249) 0:07:31.736 ***** 2026-02-05 01:52:13.375888 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.375893 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.375897 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.375901 | orchestrator | changed: 
[testbed-node-5] 2026-02-05 01:52:13.375905 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.375909 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.375913 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.375932 | orchestrator | 2026-02-05 01:52:13.375936 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-05 01:52:13.375940 | orchestrator | 2026-02-05 01:52:13.375944 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-05 01:52:13.375948 | orchestrator | Thursday 05 February 2026 01:51:46 +0000 (0:00:01.109) 0:07:32.845 ***** 2026-02-05 01:52:13.375952 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:52:13.375955 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:52:13.375959 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:52:13.375963 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:52:13.375967 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:52:13.375971 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:52:13.375974 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:52:13.375978 | orchestrator | 2026-02-05 01:52:13.375982 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-05 01:52:13.375986 | orchestrator | 2026-02-05 01:52:13.375990 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-05 01:52:13.375994 | orchestrator | Thursday 05 February 2026 01:51:46 +0000 (0:00:00.562) 0:07:33.407 ***** 2026-02-05 01:52:13.375998 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376002 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376006 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376010 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376013 | orchestrator | changed: [testbed-node-0] 2026-02-05 
01:52:13.376017 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376021 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376025 | orchestrator | 2026-02-05 01:52:13.376029 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-05 01:52:13.376033 | orchestrator | Thursday 05 February 2026 01:51:48 +0000 (0:00:01.293) 0:07:34.701 ***** 2026-02-05 01:52:13.376037 | orchestrator | ok: [testbed-manager] 2026-02-05 01:52:13.376040 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:52:13.376044 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:52:13.376048 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:52:13.376052 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:52:13.376056 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:52:13.376060 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:52:13.376063 | orchestrator | 2026-02-05 01:52:13.376067 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-05 01:52:13.376071 | orchestrator | Thursday 05 February 2026 01:51:49 +0000 (0:00:01.338) 0:07:36.039 ***** 2026-02-05 01:52:13.376075 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:52:13.376079 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:52:13.376083 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:52:13.376086 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:52:13.376090 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:52:13.376103 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:52:13.376107 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:52:13.376111 | orchestrator | 2026-02-05 01:52:13.376115 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-05 01:52:13.376119 | orchestrator | Thursday 05 February 2026 01:51:49 +0000 (0:00:00.502) 0:07:36.542 ***** 2026-02-05 01:52:13.376123 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:52:13.376129 | orchestrator | 2026-02-05 01:52:13.376133 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-05 01:52:13.376137 | orchestrator | Thursday 05 February 2026 01:51:51 +0000 (0:00:01.056) 0:07:37.599 ***** 2026-02-05 01:52:13.376143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:52:13.376152 | orchestrator | 2026-02-05 01:52:13.376157 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-05 01:52:13.376160 | orchestrator | Thursday 05 February 2026 01:51:51 +0000 (0:00:00.801) 0:07:38.400 ***** 2026-02-05 01:52:13.376164 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376168 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376172 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376176 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376180 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376184 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376187 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376191 | orchestrator | 2026-02-05 01:52:13.376208 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-05 01:52:13.376212 | orchestrator | Thursday 05 February 2026 01:52:00 +0000 (0:00:08.990) 0:07:47.391 ***** 2026-02-05 01:52:13.376216 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376220 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376223 | orchestrator | changed: [testbed-node-4] 2026-02-05 
01:52:13.376227 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376231 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376235 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376238 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376242 | orchestrator | 2026-02-05 01:52:13.376246 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-05 01:52:13.376250 | orchestrator | Thursday 05 February 2026 01:52:01 +0000 (0:00:01.056) 0:07:48.448 ***** 2026-02-05 01:52:13.376254 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376258 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376261 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376265 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376269 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376273 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376277 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376281 | orchestrator | 2026-02-05 01:52:13.376286 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-05 01:52:13.376290 | orchestrator | Thursday 05 February 2026 01:52:03 +0000 (0:00:01.374) 0:07:49.822 ***** 2026-02-05 01:52:13.376295 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376299 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376304 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376308 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376313 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376317 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376321 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376326 | orchestrator | 2026-02-05 01:52:13.376331 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-05 01:52:13.376335 | orchestrator | Thursday 05 February 2026 01:52:06 +0000 (0:00:02.933) 0:07:52.756 ***** 2026-02-05 01:52:13.376339 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376343 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376348 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376352 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376357 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376361 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376366 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376370 | orchestrator | 2026-02-05 01:52:13.376375 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-05 01:52:13.376379 | orchestrator | Thursday 05 February 2026 01:52:07 +0000 (0:00:01.171) 0:07:53.927 ***** 2026-02-05 01:52:13.376383 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376388 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376392 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376396 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376404 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376408 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376413 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376417 | orchestrator | 2026-02-05 01:52:13.376421 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-05 01:52:13.376426 | orchestrator | 2026-02-05 01:52:13.376430 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-05 01:52:13.376435 | orchestrator | Thursday 05 February 2026 01:52:08 +0000 (0:00:01.174) 0:07:55.102 ***** 2026-02-05 01:52:13.376440 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-05 01:52:13.376444 | orchestrator | 2026-02-05 01:52:13.376449 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-05 01:52:13.376453 | orchestrator | Thursday 05 February 2026 01:52:09 +0000 (0:00:00.790) 0:07:55.893 ***** 2026-02-05 01:52:13.376458 | orchestrator | ok: [testbed-manager] 2026-02-05 01:52:13.376462 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:52:13.376467 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:52:13.376471 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:52:13.376476 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:52:13.376483 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:52:13.376488 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:52:13.376492 | orchestrator | 2026-02-05 01:52:13.376496 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-05 01:52:13.376501 | orchestrator | Thursday 05 February 2026 01:52:10 +0000 (0:00:01.128) 0:07:57.022 ***** 2026-02-05 01:52:13.376506 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:13.376510 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:13.376515 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:13.376519 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:13.376568 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:13.376571 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:13.376575 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:13.376579 | orchestrator | 2026-02-05 01:52:13.376583 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-05 01:52:13.376587 | orchestrator | Thursday 05 February 2026 01:52:11 +0000 (0:00:01.151) 0:07:58.173 ***** 2026-02-05 01:52:13.376591 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-05 01:52:13.376595 | orchestrator | 2026-02-05 01:52:13.376598 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-05 01:52:13.376602 | orchestrator | Thursday 05 February 2026 01:52:12 +0000 (0:00:00.782) 0:07:58.956 ***** 2026-02-05 01:52:13.376606 | orchestrator | ok: [testbed-manager] 2026-02-05 01:52:13.376610 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:52:13.376614 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:52:13.376618 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:52:13.376622 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:52:13.376626 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:52:13.376629 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:52:13.376633 | orchestrator | 2026-02-05 01:52:13.376640 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-05 01:52:14.864296 | orchestrator | Thursday 05 February 2026 01:52:13 +0000 (0:00:00.995) 0:07:59.951 ***** 2026-02-05 01:52:14.864386 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:14.864398 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:14.864405 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:14.864413 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:14.864420 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:14.864427 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:14.864435 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:14.864442 | orchestrator | 2026-02-05 01:52:14.864451 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:52:14.864481 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-05 01:52:14.864490 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-05 01:52:14.864498 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-05 01:52:14.864505 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-05 01:52:14.864513 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-05 01:52:14.864547 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-05 01:52:14.864556 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-05 01:52:14.864564 | orchestrator | 2026-02-05 01:52:14.864571 | orchestrator | 2026-02-05 01:52:14.864578 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:52:14.864586 | orchestrator | Thursday 05 February 2026 01:52:14 +0000 (0:00:01.087) 0:08:01.039 ***** 2026-02-05 01:52:14.864593 | orchestrator | =============================================================================== 2026-02-05 01:52:14.864600 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.67s 2026-02-05 01:52:14.864608 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.69s 2026-02-05 01:52:14.864615 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 30.66s 2026-02-05 01:52:14.864622 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.80s 2026-02-05 01:52:14.864629 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.96s 2026-02-05 01:52:14.864636 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.84s 2026-02-05 01:52:14.864643 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 
10.77s 2026-02-05 01:52:14.864651 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.12s 2026-02-05 01:52:14.864658 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.78s 2026-02-05 01:52:14.864665 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.30s 2026-02-05 01:52:14.864672 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.99s 2026-02-05 01:52:14.864680 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.79s 2026-02-05 01:52:14.864687 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.56s 2026-02-05 01:52:14.864706 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.42s 2026-02-05 01:52:14.864713 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.09s 2026-02-05 01:52:14.864721 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.71s 2026-02-05 01:52:14.864728 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.66s 2026-02-05 01:52:14.864735 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.34s 2026-02-05 01:52:14.864742 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.61s 2026-02-05 01:52:14.864759 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.29s 2026-02-05 01:52:15.128282 | orchestrator | + osism apply fail2ban 2026-02-05 01:52:27.241053 | orchestrator | 2026-02-05 01:52:27 | INFO  | Task 383d1f57-95e6-4797-bbae-1bbba0e9cc22 (fail2ban) was prepared for execution. 
2026-02-05 01:52:27.241192 | orchestrator | 2026-02-05 01:52:27 | INFO  | It takes a moment until task 383d1f57-95e6-4797-bbae-1bbba0e9cc22 (fail2ban) has been started and output is visible here. 2026-02-05 01:52:48.855574 | orchestrator | 2026-02-05 01:52:48.855756 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-05 01:52:48.855778 | orchestrator | 2026-02-05 01:52:48.855791 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-05 01:52:48.855803 | orchestrator | Thursday 05 February 2026 01:52:31 +0000 (0:00:00.244) 0:00:00.244 ***** 2026-02-05 01:52:48.855816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:52:48.855830 | orchestrator | 2026-02-05 01:52:48.855842 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-05 01:52:48.855853 | orchestrator | Thursday 05 February 2026 01:52:32 +0000 (0:00:01.135) 0:00:01.380 ***** 2026-02-05 01:52:48.855864 | orchestrator | changed: [testbed-manager] 2026-02-05 01:52:48.855876 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:52:48.855887 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:52:48.855898 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:52:48.855909 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:52:48.855919 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:52:48.855930 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:52:48.855941 | orchestrator | 2026-02-05 01:52:48.855952 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-05 01:52:48.855964 | orchestrator | Thursday 05 February 2026 01:52:43 +0000 (0:00:11.320) 0:00:12.701 ***** 
2026-02-05 01:52:48.855975 | orchestrator | changed: [testbed-manager]
2026-02-05 01:52:48.855986 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:52:48.855996 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:52:48.856007 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:52:48.856018 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:52:48.856028 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:52:48.856039 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:52:48.856050 | orchestrator |
2026-02-05 01:52:48.856061 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-05 01:52:48.856072 | orchestrator | Thursday 05 February 2026 01:52:45 +0000 (0:00:01.452) 0:00:14.153 *****
2026-02-05 01:52:48.856085 | orchestrator | ok: [testbed-manager]
2026-02-05 01:52:48.856098 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:52:48.856111 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:52:48.856123 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:52:48.856136 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:52:48.856149 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:52:48.856162 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:52:48.856174 | orchestrator |
2026-02-05 01:52:48.856187 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-05 01:52:48.856199 | orchestrator | Thursday 05 February 2026 01:52:46 +0000 (0:00:01.495) 0:00:15.648 *****
2026-02-05 01:52:48.856212 | orchestrator | changed: [testbed-manager]
2026-02-05 01:52:48.856225 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:52:48.856238 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:52:48.856250 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:52:48.856263 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:52:48.856275 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:52:48.856288 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:52:48.856301 | orchestrator |
2026-02-05 01:52:48.856313 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:52:48.856326 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856366 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856381 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856393 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856406 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856418 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856431 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:52:48.856445 | orchestrator |
2026-02-05 01:52:48.856456 | orchestrator |
2026-02-05 01:52:48.856467 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:52:48.856477 | orchestrator | Thursday 05 February 2026 01:52:48 +0000 (0:00:01.652) 0:00:17.301 *****
2026-02-05 01:52:48.856488 | orchestrator | ===============================================================================
2026-02-05 01:52:48.856499 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.32s
2026-02-05 01:52:48.856510 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-02-05 01:52:48.856521 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.50s
2026-02-05 01:52:48.856561 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s
2026-02-05 01:52:48.856572 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.14s
2026-02-05 01:52:49.132856 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-05 01:52:49.132928 | orchestrator | + osism apply network
2026-02-05 01:53:01.176170 | orchestrator | 2026-02-05 01:53:01 | INFO  | Task 0dfe6255-cb1e-45e1-9387-a2dfd1e1f21e (network) was prepared for execution.
2026-02-05 01:53:01.176281 | orchestrator | 2026-02-05 01:53:01 | INFO  | It takes a moment until task 0dfe6255-cb1e-45e1-9387-a2dfd1e1f21e (network) has been started and output is visible here.
2026-02-05 01:53:30.240866 | orchestrator |
2026-02-05 01:53:30.240979 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-05 01:53:30.240997 | orchestrator |
2026-02-05 01:53:30.241009 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-05 01:53:30.241020 | orchestrator | Thursday 05 February 2026 01:53:05 +0000 (0:00:00.262) 0:00:00.262 *****
2026-02-05 01:53:30.241030 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.241043 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.241053 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.241063 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.241073 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.241083 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.241094 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.241104 | orchestrator |
2026-02-05 01:53:30.241115 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-05 01:53:30.241125 | orchestrator | Thursday 05 February 2026 01:53:06 +0000 (0:00:00.701) 0:00:00.964 *****
2026-02-05 01:53:30.241138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:53:30.241150 | orchestrator |
2026-02-05 01:53:30.241160 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-05 01:53:30.241170 | orchestrator | Thursday 05 February 2026 01:53:07 +0000 (0:00:01.151) 0:00:02.115 *****
2026-02-05 01:53:30.241209 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.241220 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.241230 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.241240 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.241250 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.241259 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.241269 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.241279 | orchestrator |
2026-02-05 01:53:30.241289 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-05 01:53:30.241299 | orchestrator | Thursday 05 February 2026 01:53:09 +0000 (0:00:02.244) 0:00:04.360 *****
2026-02-05 01:53:30.241309 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.241320 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.241330 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.241341 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.241351 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.241360 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.241370 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.241380 | orchestrator |
2026-02-05 01:53:30.241389 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-05 01:53:30.241401 | orchestrator | Thursday 05 February 2026 01:53:11 +0000 (0:00:01.905) 0:00:06.266 *****
2026-02-05 01:53:30.241414 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-05 01:53:30.241428 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-05 01:53:30.241440 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-05 01:53:30.241452 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-05 01:53:30.241462 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-05 01:53:30.241490 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-05 01:53:30.241501 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-05 01:53:30.241511 | orchestrator |
2026-02-05 01:53:30.241554 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-05 01:53:30.241567 | orchestrator | Thursday 05 February 2026 01:53:12 +0000 (0:00:00.979) 0:00:07.245 *****
2026-02-05 01:53:30.241578 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 01:53:30.241590 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:53:30.241601 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 01:53:30.241613 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 01:53:30.241624 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 01:53:30.241636 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 01:53:30.241648 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 01:53:30.241659 | orchestrator |
2026-02-05 01:53:30.241670 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-05 01:53:30.241680 | orchestrator | Thursday 05 February 2026 01:53:15 +0000 (0:00:03.342) 0:00:10.587 *****
2026-02-05 01:53:30.241690 | orchestrator | changed: [testbed-manager]
2026-02-05 01:53:30.241701 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:53:30.241711 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:53:30.241729 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:53:30.241740 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:53:30.241750 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:53:30.241760 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:53:30.241770 | orchestrator |
2026-02-05 01:53:30.241780 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-05 01:53:30.241790 | orchestrator | Thursday 05 February 2026 01:53:17 +0000 (0:00:01.702) 0:00:12.290 *****
2026-02-05 01:53:30.241800 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 01:53:30.241810 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:53:30.241821 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 01:53:30.241831 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 01:53:30.241842 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 01:53:30.241852 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 01:53:30.241899 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 01:53:30.241912 | orchestrator |
2026-02-05 01:53:30.241923 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-05 01:53:30.241934 | orchestrator | Thursday 05 February 2026 01:53:19 +0000 (0:00:01.937) 0:00:14.227 *****
2026-02-05 01:53:30.241945 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.241956 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.241967 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.241977 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.241989 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.241998 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.242009 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.242087 | orchestrator |
2026-02-05 01:53:30.242099 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-05 01:53:30.242135 | orchestrator | Thursday 05 February 2026 01:53:20 +0000 (0:00:01.129) 0:00:15.357 *****
2026-02-05 01:53:30.242146 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:53:30.242155 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:53:30.242163 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:53:30.242173 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:53:30.242183 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:53:30.242192 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:53:30.242201 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:53:30.242211 | orchestrator |
2026-02-05 01:53:30.242221 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-05 01:53:30.242231 | orchestrator | Thursday 05 February 2026 01:53:21 +0000 (0:00:00.633) 0:00:15.990 *****
2026-02-05 01:53:30.242240 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.242249 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.242259 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.242269 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.242280 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.242290 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.242300 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.242310 | orchestrator |
2026-02-05 01:53:30.242320 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-05 01:53:30.242329 | orchestrator | Thursday 05 February 2026 01:53:23 +0000 (0:00:02.433) 0:00:18.423 *****
2026-02-05 01:53:30.242339 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:53:30.242348 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:53:30.242357 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:53:30.242366 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:53:30.242375 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:53:30.242385 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:53:30.242395 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-05 01:53:30.242407 | orchestrator |
2026-02-05 01:53:30.242417 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-05 01:53:30.242427 | orchestrator | Thursday 05 February 2026 01:53:24 +0000 (0:00:00.895) 0:00:19.318 *****
2026-02-05 01:53:30.242436 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.242446 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:53:30.242456 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:53:30.242465 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:53:30.242474 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:53:30.242483 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:53:30.242491 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:53:30.242501 | orchestrator |
2026-02-05 01:53:30.242510 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-05 01:53:30.242519 | orchestrator | Thursday 05 February 2026 01:53:26 +0000 (0:00:01.670) 0:00:20.989 *****
2026-02-05 01:53:30.242592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:53:30.242614 | orchestrator |
2026-02-05 01:53:30.242623 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-05 01:53:30.242632 | orchestrator | Thursday 05 February 2026 01:53:27 +0000 (0:00:01.234) 0:00:22.224 *****
2026-02-05 01:53:30.242640 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.242649 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.242659 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.242666 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.242674 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.242682 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.242690 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.242697 | orchestrator |
2026-02-05 01:53:30.242706 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-05 01:53:30.242714 | orchestrator | Thursday 05 February 2026 01:53:28 +0000 (0:00:00.984) 0:00:23.209 *****
2026-02-05 01:53:30.242722 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:30.242731 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:30.242739 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:30.242747 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:30.242756 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:30.242765 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:30.242773 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:30.242782 | orchestrator |
2026-02-05 01:53:30.242791 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-05 01:53:30.242801 | orchestrator | Thursday 05 February 2026 01:53:29 +0000 (0:00:00.803) 0:00:24.013 *****
2026-02-05 01:53:30.242818 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242829 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242840 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242849 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242858 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242867 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242876 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242885 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242893 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-05 01:53:30.242901 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242910 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242919 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242929 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242936 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-05 01:53:30.242944 | orchestrator |
2026-02-05 01:53:30.242967 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-05 01:53:46.007134 | orchestrator | Thursday 05 February 2026 01:53:30 +0000 (0:00:01.146) 0:00:25.159 *****
2026-02-05 01:53:46.007225 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:53:46.007238 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:53:46.007246 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:53:46.007256 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:53:46.007269 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:53:46.007282 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:53:46.007300 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:53:46.007318 | orchestrator |
2026-02-05 01:53:46.007333 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-05 01:53:46.007373 | orchestrator | Thursday 05 February 2026 01:53:30 +0000 (0:00:00.636) 0:00:25.795 *****
2026-02-05 01:53:46.007389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-5, testbed-node-2, testbed-node-4, testbed-node-3
2026-02-05 01:53:46.007404 | orchestrator |
2026-02-05 01:53:46.007417 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-05 01:53:46.007431 | orchestrator | Thursday 05 February 2026 01:53:35 +0000 (0:00:04.386) 0:00:30.182 *****
2026-02-05 01:53:46.007447 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007492 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007762 | orchestrator |
2026-02-05 01:53:46.007776 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-05 01:53:46.007793 | orchestrator | Thursday 05 February 2026 01:53:40 +0000 (0:00:05.598) 0:00:35.781 *****
2026-02-05 01:53:46.007808 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007824 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-05 01:53:46.007915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:46.007969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:51.356188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-05 01:53:51.356295 | orchestrator |
2026-02-05 01:53:51.356311 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-05 01:53:51.356323 | orchestrator | Thursday 05 February 2026 01:53:45 +0000 (0:00:05.143) 0:00:40.924 *****
2026-02-05 01:53:51.356335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:53:51.356344 | orchestrator |
2026-02-05 01:53:51.356354 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-05 01:53:51.356363 | orchestrator | Thursday 05 February 2026 01:53:47 +0000 (0:00:01.049) 0:00:41.974 *****
2026-02-05 01:53:51.356373 | orchestrator | ok: [testbed-manager]
2026-02-05 01:53:51.356383 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:53:51.356392 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:53:51.356401 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:53:51.356411 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:53:51.356420 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:53:51.356428 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:53:51.356438 | orchestrator |
2026-02-05 01:53:51.356447 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-05 01:53:51.356456 | orchestrator | Thursday 05 February 2026 01:53:48 +0000 (0:00:01.064) 0:00:43.039 *****
2026-02-05 01:53:51.356465 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356477 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356490 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.356504 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.356517 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:53:51.356589 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356603 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356617 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.356631 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.356644 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:53:51.356658 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356672 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356686 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.356700 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.356716 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:53:51.356731 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356769 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356783 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.356796 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.356808 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:53:51.356839 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356854 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356867 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.356880 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.356893 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:53:51.356905 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356917 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356930 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.356942 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.356955 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:53:51.356968 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-05 01:53:51.356981 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-05 01:53:51.356994 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-05 01:53:51.357008 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-05 01:53:51.357020 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:53:51.357034 | orchestrator |
2026-02-05 01:53:51.357048 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-05 01:53:51.357081 | orchestrator | Thursday 05 February 2026 01:53:49 +0000 (0:00:01.653) 0:00:44.693 *****
2026-02-05 01:53:51.357097 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:53:51.357115 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:53:51.357129 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:53:51.357141 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:53:51.357152 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:53:51.357164 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:53:51.357240 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:53:51.357255 | orchestrator |
2026-02-05 01:53:51.357269 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-05 01:53:51.357281 | orchestrator | Thursday 05 February 2026 01:53:50 +0000 (0:00:00.556) 0:00:45.249 *****
2026-02-05 01:53:51.357294 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:53:51.357308 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:53:51.357320 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:53:51.357332 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:53:51.357345 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:53:51.357356 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:53:51.357369 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:53:51.357382 | orchestrator |
2026-02-05 01:53:51.357395 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:53:51.357409 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 01:53:51.357425 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:53:51.357439 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:53:51.357471 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:53:51.357487 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:53:51.357501 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:53:51.357513 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 01:53:51.357556 | orchestrator |
2026-02-05 01:53:51.357570 | orchestrator |
2026-02-05 01:53:51.357584 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:53:51.357597 | orchestrator | Thursday 05 February 2026 01:53:50 +0000 (0:00:00.682) 0:00:45.932 *****
2026-02-05 01:53:51.357609 | orchestrator | ===============================================================================
2026-02-05 01:53:51.357623 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.60s
2026-02-05 01:53:51.357637 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.14s 2026-02-05 01:53:51.357649 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.39s 2026-02-05 01:53:51.357662 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.34s 2026-02-05 01:53:51.357673 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.43s 2026-02-05 01:53:51.357681 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.24s 2026-02-05 01:53:51.357689 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2026-02-05 01:53:51.357710 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.91s 2026-02-05 01:53:51.357724 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s 2026-02-05 01:53:51.357737 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s 2026-02-05 01:53:51.357751 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.65s 2026-02-05 01:53:51.357764 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.23s 2026-02-05 01:53:51.357777 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s 2026-02-05 01:53:51.357790 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2026-02-05 01:53:51.357803 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2026-02-05 01:53:51.357817 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.06s 2026-02-05 01:53:51.357830 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.05s 2026-02-05 
01:53:51.357844 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2026-02-05 01:53:51.357856 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s 2026-02-05 01:53:51.357871 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s 2026-02-05 01:53:51.644086 | orchestrator | + osism apply wireguard 2026-02-05 01:54:03.627960 | orchestrator | 2026-02-05 01:54:03 | INFO  | Task f7123d67-81cd-4b62-832e-3a3c62cf93b3 (wireguard) was prepared for execution. 2026-02-05 01:54:03.628062 | orchestrator | 2026-02-05 01:54:03 | INFO  | It takes a moment until task f7123d67-81cd-4b62-832e-3a3c62cf93b3 (wireguard) has been started and output is visible here. 2026-02-05 01:54:23.319356 | orchestrator | 2026-02-05 01:54:23.319495 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-05 01:54:23.319586 | orchestrator | 2026-02-05 01:54:23.319647 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-05 01:54:23.319668 | orchestrator | Thursday 05 February 2026 01:54:07 +0000 (0:00:00.221) 0:00:00.221 ***** 2026-02-05 01:54:23.319688 | orchestrator | ok: [testbed-manager] 2026-02-05 01:54:23.319709 | orchestrator | 2026-02-05 01:54:23.319726 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-05 01:54:23.319744 | orchestrator | Thursday 05 February 2026 01:54:09 +0000 (0:00:01.460) 0:00:01.682 ***** 2026-02-05 01:54:23.319759 | orchestrator | changed: [testbed-manager] 2026-02-05 01:54:23.319778 | orchestrator | 2026-02-05 01:54:23.319804 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-05 01:54:23.319824 | orchestrator | Thursday 05 February 2026 01:54:15 +0000 (0:00:06.464) 0:00:08.146 ***** 2026-02-05 01:54:23.319843 | orchestrator | changed: 
[testbed-manager] 2026-02-05 01:54:23.319862 | orchestrator | 2026-02-05 01:54:23.319881 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-05 01:54:23.319901 | orchestrator | Thursday 05 February 2026 01:54:16 +0000 (0:00:00.552) 0:00:08.699 ***** 2026-02-05 01:54:23.319920 | orchestrator | changed: [testbed-manager] 2026-02-05 01:54:23.319940 | orchestrator | 2026-02-05 01:54:23.319960 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-05 01:54:23.319981 | orchestrator | Thursday 05 February 2026 01:54:16 +0000 (0:00:00.443) 0:00:09.142 ***** 2026-02-05 01:54:23.320000 | orchestrator | ok: [testbed-manager] 2026-02-05 01:54:23.320022 | orchestrator | 2026-02-05 01:54:23.320041 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-05 01:54:23.320062 | orchestrator | Thursday 05 February 2026 01:54:17 +0000 (0:00:00.687) 0:00:09.830 ***** 2026-02-05 01:54:23.320080 | orchestrator | ok: [testbed-manager] 2026-02-05 01:54:23.320101 | orchestrator | 2026-02-05 01:54:23.320121 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-05 01:54:23.320141 | orchestrator | Thursday 05 February 2026 01:54:17 +0000 (0:00:00.413) 0:00:10.243 ***** 2026-02-05 01:54:23.320160 | orchestrator | ok: [testbed-manager] 2026-02-05 01:54:23.320180 | orchestrator | 2026-02-05 01:54:23.320199 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-05 01:54:23.320221 | orchestrator | Thursday 05 February 2026 01:54:18 +0000 (0:00:00.437) 0:00:10.681 ***** 2026-02-05 01:54:23.320239 | orchestrator | changed: [testbed-manager] 2026-02-05 01:54:23.320254 | orchestrator | 2026-02-05 01:54:23.320265 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-05 01:54:23.320296 | orchestrator 
| Thursday 05 February 2026 01:54:19 +0000 (0:00:01.171) 0:00:11.853 ***** 2026-02-05 01:54:23.320343 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 01:54:23.320361 | orchestrator | changed: [testbed-manager] 2026-02-05 01:54:23.320379 | orchestrator | 2026-02-05 01:54:23.320396 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-05 01:54:23.320414 | orchestrator | Thursday 05 February 2026 01:54:20 +0000 (0:00:00.920) 0:00:12.773 ***** 2026-02-05 01:54:23.320430 | orchestrator | changed: [testbed-manager] 2026-02-05 01:54:23.320448 | orchestrator | 2026-02-05 01:54:23.320467 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-05 01:54:23.320485 | orchestrator | Thursday 05 February 2026 01:54:22 +0000 (0:00:01.639) 0:00:14.412 ***** 2026-02-05 01:54:23.320504 | orchestrator | changed: [testbed-manager] 2026-02-05 01:54:23.320552 | orchestrator | 2026-02-05 01:54:23.320574 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:54:23.320594 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:54:23.320614 | orchestrator | 2026-02-05 01:54:23.320633 | orchestrator | 2026-02-05 01:54:23.320646 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:54:23.320657 | orchestrator | Thursday 05 February 2026 01:54:22 +0000 (0:00:00.937) 0:00:15.350 ***** 2026-02-05 01:54:23.320684 | orchestrator | =============================================================================== 2026-02-05 01:54:23.320695 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.46s 2026-02-05 01:54:23.320706 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s 2026-02-05 01:54:23.320716 | orchestrator | 
osism.services.wireguard : Install iptables package --------------------- 1.46s 2026-02-05 01:54:23.320727 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-02-05 01:54:23.320738 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2026-02-05 01:54:23.320749 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2026-02-05 01:54:23.320759 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2026-02-05 01:54:23.320770 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-02-05 01:54:23.320780 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2026-02-05 01:54:23.320791 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2026-02-05 01:54:23.320802 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2026-02-05 01:54:23.617892 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-05 01:54:23.660481 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-05 01:54:23.660655 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-05 01:54:23.741936 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 185 0 --:--:-- --:--:-- --:--:-- 187 2026-02-05 01:54:23.756664 | orchestrator | + osism apply --environment custom workarounds 2026-02-05 01:54:25.664262 | orchestrator | 2026-02-05 01:54:25 | INFO  | Trying to run play workarounds in environment custom 2026-02-05 01:54:35.775244 | orchestrator | 2026-02-05 01:54:35 | INFO  | Task a60c11de-3c18-46d2-83df-326e69b42fb7 (workarounds) was prepared for execution. 
2026-02-05 01:54:35.775369 | orchestrator | 2026-02-05 01:54:35 | INFO  | It takes a moment until task a60c11de-3c18-46d2-83df-326e69b42fb7 (workarounds) has been started and output is visible here.
2026-02-05 01:54:59.641754 | orchestrator |
2026-02-05 01:54:59.641866 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:54:59.641882 | orchestrator |
2026-02-05 01:54:59.641895 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-05 01:54:59.641907 | orchestrator | Thursday 05 February 2026 01:54:39 +0000 (0:00:00.124) 0:00:00.124 *****
2026-02-05 01:54:59.641918 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641930 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641941 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641951 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641962 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641973 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641984 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-05 01:54:59.641995 | orchestrator |
2026-02-05 01:54:59.642006 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-05 01:54:59.642068 | orchestrator |
2026-02-05 01:54:59.642083 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-05 01:54:59.642096 | orchestrator | Thursday 05 February 2026 01:54:40 +0000 (0:00:00.653) 0:00:00.777 *****
2026-02-05 01:54:59.642107 | orchestrator | ok: [testbed-manager]
2026-02-05 01:54:59.642120 | orchestrator |
2026-02-05 01:54:59.642132 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-05 01:54:59.642169 | orchestrator |
2026-02-05 01:54:59.642182 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-05 01:54:59.642194 | orchestrator | Thursday 05 February 2026 01:54:42 +0000 (0:00:01.989) 0:00:02.766 *****
2026-02-05 01:54:59.642206 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:54:59.642218 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:54:59.642230 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:54:59.642242 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:54:59.642253 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:54:59.642265 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:54:59.642276 | orchestrator |
2026-02-05 01:54:59.642291 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-05 01:54:59.642305 | orchestrator |
2026-02-05 01:54:59.642318 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-05 01:54:59.642333 | orchestrator | Thursday 05 February 2026 01:54:44 +0000 (0:00:01.799) 0:00:04.566 *****
2026-02-05 01:54:59.642346 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 01:54:59.642359 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 01:54:59.642372 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 01:54:59.642384 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 01:54:59.642397 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 01:54:59.642424 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 01:54:59.642437 | orchestrator |
2026-02-05 01:54:59.642449 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-05 01:54:59.642462 | orchestrator | Thursday 05 February 2026 01:54:45 +0000 (0:00:01.489) 0:00:06.056 *****
2026-02-05 01:54:59.642475 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:54:59.642489 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:54:59.642501 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:54:59.642513 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:54:59.642551 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:54:59.642564 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:54:59.642577 | orchestrator |
2026-02-05 01:54:59.642589 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-05 01:54:59.642602 | orchestrator | Thursday 05 February 2026 01:54:48 +0000 (0:00:02.750) 0:00:08.806 *****
2026-02-05 01:54:59.642615 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:54:59.642628 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:54:59.642641 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:54:59.642653 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:54:59.642666 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:54:59.642677 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:54:59.642687 | orchestrator |
2026-02-05 01:54:59.642698 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-05 01:54:59.642709 | orchestrator |
2026-02-05 01:54:59.642720 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-05 01:54:59.642731 | orchestrator | Thursday 05 February 2026 01:54:49 +0000 (0:00:00.724) 0:00:09.531 *****
2026-02-05 01:54:59.642741 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:54:59.642752 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:54:59.642763 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:54:59.642774 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:54:59.642784 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:54:59.642795 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:54:59.642805 | orchestrator | changed: [testbed-manager]
2026-02-05 01:54:59.642816 | orchestrator |
2026-02-05 01:54:59.642827 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-05 01:54:59.642847 | orchestrator | Thursday 05 February 2026 01:54:50 +0000 (0:00:01.472) 0:00:11.003 *****
2026-02-05 01:54:59.642858 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:54:59.642868 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:54:59.642879 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:54:59.642890 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:54:59.642900 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:54:59.642911 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:54:59.642941 | orchestrator | changed: [testbed-manager]
2026-02-05 01:54:59.642953 | orchestrator |
2026-02-05 01:54:59.642964 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-05 01:54:59.642975 | orchestrator | Thursday 05 February 2026 01:54:52 +0000 (0:00:01.534) 0:00:12.538 *****
2026-02-05 01:54:59.642985 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:54:59.642996 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:54:59.643007 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:54:59.643018 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:54:59.643029 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:54:59.643039 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:54:59.643050 | orchestrator | ok: [testbed-manager]
2026-02-05 01:54:59.643061 | orchestrator |
2026-02-05 01:54:59.643071 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-05 01:54:59.643082 | orchestrator | Thursday 05 February 2026 01:54:53 +0000 (0:00:01.588) 0:00:14.126 *****
2026-02-05 01:54:59.643093 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:54:59.643104 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:54:59.643114 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:54:59.643125 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:54:59.643136 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:54:59.643146 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:54:59.643157 | orchestrator | changed: [testbed-manager]
2026-02-05 01:54:59.643167 | orchestrator |
2026-02-05 01:54:59.643178 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-05 01:54:59.643189 | orchestrator | Thursday 05 February 2026 01:54:55 +0000 (0:00:01.785) 0:00:15.912 *****
2026-02-05 01:54:59.643200 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:54:59.643210 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:54:59.643221 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:54:59.643232 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:54:59.643243 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:54:59.643254 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:54:59.643264 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:54:59.643275 | orchestrator |
2026-02-05 01:54:59.643286 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-05 01:54:59.643297 | orchestrator |
2026-02-05 01:54:59.643308 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-05 01:54:59.643319 | orchestrator | Thursday 05 February 2026 01:54:56 +0000 (0:00:00.608) 0:00:16.520 *****
2026-02-05 01:54:59.643329 | orchestrator | ok: [testbed-manager]
2026-02-05 01:54:59.643340 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:54:59.643351 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:54:59.643362 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:54:59.643372 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:54:59.643383 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:54:59.643394 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:54:59.643404 | orchestrator |
2026-02-05 01:54:59.643415 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:54:59.643427 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:54:59.643439 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:54:59.643457 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:54:59.643473 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:54:59.643484 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:54:59.643495 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:54:59.643506 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:54:59.643517 | orchestrator |
2026-02-05 01:54:59.643559 | orchestrator |
2026-02-05 01:54:59.643570 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:54:59.643581 | orchestrator | Thursday 05 February 2026 01:54:59 +0000 (0:00:03.475) 0:00:19.996 *****
2026-02-05 01:54:59.643592 | orchestrator | ===============================================================================
2026-02-05 01:54:59.643602 | orchestrator | Install python3-docker -------------------------------------------------- 3.48s
2026-02-05 01:54:59.643613 | orchestrator | Run update-ca-certificates ---------------------------------------------- 2.75s
2026-02-05 01:54:59.643624 | orchestrator | Apply netplan configuration --------------------------------------------- 1.99s
2026-02-05 01:54:59.643634 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s
2026-02-05 01:54:59.643645 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s
2026-02-05 01:54:59.643656 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s
2026-02-05 01:54:59.643667 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.53s
2026-02-05 01:54:59.643683 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2026-02-05 01:54:59.643700 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.47s
2026-02-05 01:54:59.643718 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s
2026-02-05 01:54:59.643736 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.65s
2026-02-05 01:54:59.643763 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s
2026-02-05 01:55:00.388345 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-05 01:55:12.699116 | orchestrator | 2026-02-05 01:55:12 | INFO  | Task bb00b7e8-6071-4664-89a6-cc2cf4018565 (reboot) was prepared for execution.
2026-02-05 01:55:12.699251 | orchestrator | 2026-02-05 01:55:12 | INFO  | It takes a moment until task bb00b7e8-6071-4664-89a6-cc2cf4018565 (reboot) has been started and output is visible here.
2026-02-05 01:55:21.933746 | orchestrator |
2026-02-05 01:55:21.933893 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-05 01:55:21.933916 | orchestrator |
2026-02-05 01:55:21.933931 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-05 01:55:21.933984 | orchestrator | Thursday 05 February 2026 01:55:16 +0000 (0:00:00.148) 0:00:00.148 *****
2026-02-05 01:55:21.933999 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:55:21.934053 | orchestrator |
2026-02-05 01:55:21.934069 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-05 01:55:21.934083 | orchestrator | Thursday 05 February 2026 01:55:16 +0000 (0:00:00.079) 0:00:00.228 *****
2026-02-05 01:55:21.934092 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:55:21.934100 | orchestrator |
2026-02-05 01:55:21.934108 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-05 01:55:21.934116 | orchestrator | Thursday 05 February 2026 01:55:17 +0000 (0:00:00.914) 0:00:01.142 *****
2026-02-05 01:55:21.934150 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:55:21.934162 | orchestrator |
2026-02-05 01:55:21.934173 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-05 01:55:21.934184 | orchestrator |
2026-02-05 01:55:21.934196 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-05 01:55:21.934207 | orchestrator | Thursday 05 February 2026 01:55:17 +0000 (0:00:00.096) 0:00:01.239 *****
2026-02-05 01:55:21.934218 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:55:21.934230 | orchestrator |
2026-02-05 01:55:21.934241 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-05 01:55:21.934253 | orchestrator | Thursday 05 February 2026 01:55:17 +0000 (0:00:00.106) 0:00:01.345 *****
2026-02-05 01:55:21.934264 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:55:21.934274 | orchestrator |
2026-02-05 01:55:21.934286 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-05 01:55:21.934298 | orchestrator | Thursday 05 February 2026 01:55:18 +0000 (0:00:00.633) 0:00:01.979 *****
2026-02-05 01:55:21.934310 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:55:21.934321 | orchestrator |
2026-02-05 01:55:21.934332 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-05 01:55:21.934343 | orchestrator |
2026-02-05 01:55:21.934354 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-05 01:55:21.934366 | orchestrator | Thursday 05 February 2026 01:55:18 +0000 (0:00:00.102) 0:00:02.081 *****
2026-02-05 01:55:21.934377 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:55:21.934389 | orchestrator |
2026-02-05 01:55:21.934400 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-05 01:55:21.934413 | orchestrator | Thursday 05 February 2026 01:55:18 +0000 (0:00:00.146) 0:00:02.228 *****
2026-02-05 01:55:21.934424 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:55:21.934436 | orchestrator |
2026-02-05 01:55:21.934461 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-05 01:55:21.934472 | orchestrator | Thursday 05 February 2026 01:55:19 +0000 (0:00:00.664) 0:00:02.892 *****
2026-02-05 01:55:21.934482 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:55:21.934494 | orchestrator |
2026-02-05 01:55:21.934505 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-05 01:55:21.934516 | orchestrator |
2026-02-05 01:55:21.934553 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-05 01:55:21.934564 | orchestrator | Thursday 05 February 2026 01:55:19 +0000 (0:00:00.095) 0:00:02.988 *****
2026-02-05 01:55:21.934575 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:55:21.934586 | orchestrator |
2026-02-05 01:55:21.934595 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-05 01:55:21.934602 | orchestrator | Thursday 05 February 2026 01:55:19 +0000 (0:00:00.083) 0:00:03.071 *****
2026-02-05 01:55:21.934609 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:55:21.934616 | orchestrator |
2026-02-05 01:55:21.934622 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-05 01:55:21.934629 | orchestrator | Thursday 05 February 2026 01:55:19 +0000 (0:00:00.685) 0:00:03.757 *****
2026-02-05 01:55:21.934636 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:55:21.934642 | orchestrator |
2026-02-05 01:55:21.934649 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-05 01:55:21.934656 | orchestrator |
2026-02-05 01:55:21.934662 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-05 01:55:21.934669 | orchestrator | Thursday 05 February 2026 01:55:20 +0000 (0:00:00.101) 0:00:03.859 *****
2026-02-05 01:55:21.934676 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:55:21.934683 | orchestrator |
2026-02-05 01:55:21.934689 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-05 01:55:21.934696 | orchestrator | Thursday 05 February 2026 01:55:20 +0000 (0:00:00.084) 0:00:03.943 *****
2026-02-05 01:55:21.934711 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:55:21.934718 | orchestrator |
2026-02-05 01:55:21.934725 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-05 01:55:21.934732 | orchestrator | Thursday 05 February 2026 01:55:20 +0000 (0:00:00.692) 0:00:04.636 *****
2026-02-05 01:55:21.934739 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:55:21.934745 | orchestrator |
2026-02-05 01:55:21.934765 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-05 01:55:21.934780 | orchestrator |
2026-02-05 01:55:21.934787 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-05 01:55:21.934794 | orchestrator | Thursday 05 February 2026 01:55:20 +0000 (0:00:00.099) 0:00:04.735 *****
2026-02-05 01:55:21.934801 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:55:21.934807 | orchestrator |
2026-02-05 01:55:21.934814 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-05 01:55:21.934821 | orchestrator | Thursday 05 February 2026 01:55:21 +0000 (0:00:00.088) 0:00:04.824 *****
2026-02-05 01:55:21.934827 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:55:21.934834 | orchestrator |
2026-02-05 01:55:21.934841 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-05 01:55:21.934847 | orchestrator | Thursday 05 February 2026 01:55:21 +0000 (0:00:00.663) 0:00:05.487 *****
2026-02-05 01:55:21.934873 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:55:21.934880 | orchestrator |
2026-02-05 01:55:21.934887 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:55:21.934895 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:55:21.934904 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:55:21.934910 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:55:21.934917 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:55:21.934923 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:55:21.934930 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:55:21.934937 | orchestrator |
2026-02-05 01:55:21.934944 | orchestrator |
2026-02-05 01:55:21.934950 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:55:21.934957 | orchestrator | Thursday 05 February 2026 01:55:21 +0000 (0:00:00.038) 0:00:05.525 *****
2026-02-05 01:55:21.934964 | orchestrator | ===============================================================================
2026-02-05 01:55:21.934971 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s
2026-02-05 01:55:21.934977 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s
2026-02-05 01:55:21.934984 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s
2026-02-05 01:55:22.118106 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-02-05 01:55:33.920250 | orchestrator | 2026-02-05 01:55:33 | INFO  | Task 4ee0057d-f696-4781-a8e0-46e58da72513 (wait-for-connection) was prepared for execution.
2026-02-05 01:55:33.920352 | orchestrator | 2026-02-05 01:55:33 | INFO  | It takes a moment until task 4ee0057d-f696-4781-a8e0-46e58da72513 (wait-for-connection) has been started and output is visible here.
2026-02-05 01:55:49.495052 | orchestrator |
2026-02-05 01:55:49.495197 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-05 01:55:49.495258 | orchestrator |
2026-02-05 01:55:49.495279 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-05 01:55:49.495299 | orchestrator | Thursday 05 February 2026 01:55:37 +0000 (0:00:00.204) 0:00:00.204 *****
2026-02-05 01:55:49.495319 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:55:49.495338 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:55:49.495356 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:55:49.495375 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:55:49.495390 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:55:49.495401 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:55:49.495412 | orchestrator |
2026-02-05 01:55:49.495423 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:55:49.495459 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:55:49.495473 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:55:49.495485 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:55:49.495496 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:55:49.495507 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:55:49.495556 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:55:49.495575 | orchestrator |
2026-02-05 01:55:49.495594 | orchestrator |
2026-02-05 01:55:49.495614 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:55:49.495632 | orchestrator | Thursday 05 February 2026 01:55:49 +0000 (0:00:11.548) 0:00:11.752 *****
2026-02-05 01:55:49.495652 | orchestrator | ===============================================================================
2026-02-05 01:55:49.495673 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s
2026-02-05 01:55:49.718120 | orchestrator | + osism apply hddtemp
2026-02-05 01:56:01.483358 | orchestrator | 2026-02-05 01:56:01 | INFO  | Task a99f7c1e-b534-45fa-9edf-1bf07755348e (hddtemp) was prepared for execution.
2026-02-05 01:56:01.483447 | orchestrator | 2026-02-05 01:56:01 | INFO  | It takes a moment until task a99f7c1e-b534-45fa-9edf-1bf07755348e (hddtemp) has been started and output is visible here.
2026-02-05 01:56:28.833689 | orchestrator |
2026-02-05 01:56:28.833789 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-05 01:56:28.833802 | orchestrator |
2026-02-05 01:56:28.833809 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-05 01:56:28.833817 | orchestrator | Thursday 05 February 2026 01:56:05 +0000 (0:00:00.240) 0:00:00.240 *****
2026-02-05 01:56:28.833824 | orchestrator | ok: [testbed-manager]
2026-02-05 01:56:28.833832 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:56:28.833838 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:56:28.833844 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:56:28.833851 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:56:28.833857 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:56:28.833863 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:56:28.833869 | orchestrator |
2026-02-05 01:56:28.833876 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-05 01:56:28.833882 | orchestrator | Thursday 05 February 2026 01:56:05 +0000 (0:00:00.610) 0:00:00.850 *****
2026-02-05 01:56:28.833891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:56:28.833900 | orchestrator |
2026-02-05 01:56:28.833931 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-02-05 01:56:28.833939 | orchestrator | Thursday 05 February 2026 01:56:06 +0000 (0:00:01.034) 0:00:01.884 *****
2026-02-05 01:56:28.833946 | orchestrator | ok: [testbed-manager]
2026-02-05 01:56:28.833952 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:56:28.833958 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:56:28.833964 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:56:28.833969 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:56:28.833976 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:56:28.833983 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:56:28.833989 | orchestrator |
2026-02-05 01:56:28.833995 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-02-05 01:56:28.834003 | orchestrator | Thursday 05 February 2026 01:56:08 +0000 (0:00:01.914) 0:00:03.799 *****
2026-02-05 01:56:28.834010 | orchestrator | changed: [testbed-manager]
2026-02-05 01:56:28.834061 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:56:28.834065 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:56:28.834069 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:56:28.834073 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:56:28.834077 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:56:28.834081 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:56:28.834085 | orchestrator |
2026-02-05 01:56:28.834089 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-02-05 01:56:28.834092 | orchestrator | Thursday 05 February 2026 01:56:09 +0000 (0:00:01.035) 0:00:04.835 *****
2026-02-05 01:56:28.834096 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:56:28.834100 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:56:28.834104 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:56:28.834108 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:56:28.834125 | orchestrator | ok: [testbed-manager]
2026-02-05 01:56:28.834131 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:56:28.834138 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:56:28.834144 | orchestrator |
2026-02-05 01:56:28.834150 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-02-05 01:56:28.834156 | orchestrator | Thursday 05 February 2026 01:56:11 +0000 (0:00:01.795) 0:00:06.630 *****
2026-02-05 01:56:28.834162 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:56:28.834169 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:56:28.834175 | orchestrator | changed: [testbed-manager]
2026-02-05 01:56:28.834181 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:56:28.834187 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:56:28.834194 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:56:28.834200 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:56:28.834207 | orchestrator |
2026-02-05 01:56:28.834213 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-02-05 01:56:28.834219 | orchestrator | Thursday 05 February 2026 01:56:12 +0000 (0:00:00.780) 0:00:07.411 *****
2026-02-05 01:56:28.834226 | orchestrator | changed: [testbed-manager]
2026-02-05 01:56:28.834230 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:56:28.834235 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:56:28.834239 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:56:28.834244 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:56:28.834248 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:56:28.834253 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:56:28.834257 | orchestrator |
2026-02-05 01:56:28.834262 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-02-05 01:56:28.834267 | orchestrator | Thursday 05 February 2026 01:56:25 +0000 (0:00:12.717) 0:00:20.129 *****
2026-02-05 01:56:28.834274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:56:28.834281 | orchestrator |
2026-02-05 01:56:28.834287 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-02-05 01:56:28.834301 | orchestrator | Thursday 05 February 2026 01:56:26 +0000 (0:00:01.332) 0:00:21.462 *****
2026-02-05 01:56:28.834308 | orchestrator | changed: [testbed-manager]
2026-02-05 01:56:28.834314 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:56:28.834321 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:56:28.834328 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:56:28.834335 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:56:28.834341 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:56:28.834347 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:56:28.834354 | orchestrator |
2026-02-05 01:56:28.834361 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:56:28.834367 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:56:28.834392 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:56:28.834398 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:56:28.834403 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:56:28.834407 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:56:28.834412 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:56:28.834416 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:56:28.834420 | orchestrator |
2026-02-05 01:56:28.834425 | orchestrator |
2026-02-05 01:56:28.834429 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:56:28.834433 | orchestrator | Thursday 05 February 2026 01:56:28 +0000 (0:00:01.986) 0:00:23.448 *****
2026-02-05 01:56:28.834438 | orchestrator | ===============================================================================
2026-02-05 01:56:28.834442 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.72s
2026-02-05 01:56:28.834447 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.99s
2026-02-05 01:56:28.834451 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s
2026-02-05 01:56:28.834455 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.80s
2026-02-05 01:56:28.834460 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.33s
2026-02-05 01:56:28.834464 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.04s
2026-02-05 01:56:28.834468 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.03s
2026-02-05 01:56:28.834473 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.78s
2026-02-05 01:56:28.834477 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s
2026-02-05 01:56:29.113005 | orchestrator | ++ semver 9.5.0 7.1.1
2026-02-05 01:56:29.165597 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 01:56:29.165692 | orchestrator | + sudo systemctl restart manager.service
2026-02-05 01:56:42.683462 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-05 01:56:42.683651 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-05 01:56:42.683671 | orchestrator | + local max_attempts=60
2026-02-05 01:56:42.683684 | orchestrator | + local name=ceph-ansible
2026-02-05 01:56:42.683694 | orchestrator | + local attempt_num=1
2026-02-05 01:56:42.683704 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:56:42.724654 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:56:42.724753 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:56:42.724761 | orchestrator | + sleep 5
2026-02-05 01:56:47.730320 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:56:47.749072 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:56:47.749155 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:56:47.749162 | orchestrator | + sleep 5
2026-02-05 01:56:52.752434 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:56:52.785294 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:56:52.785402 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:56:52.785424 | orchestrator | + sleep 5
2026-02-05 01:56:57.790429 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:56:57.825726 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:56:57.825806 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:56:57.825820 | orchestrator | + sleep 5
2026-02-05 01:57:02.830609 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:02.865152 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:02.865237 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:02.865248 | orchestrator | + sleep 5
2026-02-05 01:57:07.868808 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:07.914699 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:07.914785 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:07.914794 | orchestrator | + sleep 5
2026-02-05 01:57:12.919377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:12.958676 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:12.958768 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:12.958779 | orchestrator | + sleep 5
2026-02-05 01:57:17.963634 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:18.007516 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:18.007606 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:18.007619 | orchestrator | + sleep 5
2026-02-05 01:57:23.009369 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:23.127850 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:23.127912 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:23.127919 | orchestrator | + sleep 5
2026-02-05 01:57:28.130943 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:28.173336 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:28.173441 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:28.173452 | orchestrator | + sleep 5
2026-02-05 01:57:33.179721 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:33.204259 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:33.204344 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:33.204355 | orchestrator | + sleep 5
2026-02-05 01:57:38.207765 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:38.251386 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:38.251459 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:38.251465 | orchestrator | + sleep 5
2026-02-05 01:57:43.255912 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:43.295182 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:43.295241 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-05 01:57:43.295249 | orchestrator | + sleep 5
2026-02-05 01:57:48.301302 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-05 01:57:48.336088 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:48.336192 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-05 01:57:48.336202 | orchestrator | + local max_attempts=60
2026-02-05 01:57:48.336211 | orchestrator | + local name=kolla-ansible
2026-02-05 01:57:48.336217 | orchestrator | + local attempt_num=1
2026-02-05 01:57:48.336236 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-05 01:57:48.379038 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:48.379140 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-05 01:57:48.379152 | orchestrator | + local max_attempts=60
2026-02-05 01:57:48.379160 | orchestrator | + local name=osism-ansible
2026-02-05 01:57:48.379198 | orchestrator | + local attempt_num=1
2026-02-05 01:57:48.379217 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-05 01:57:48.414974 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-05 01:57:48.415071 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-05 01:57:48.415083 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-05 01:57:48.578456 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-05 01:57:48.721173 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-05 01:57:48.863556 | orchestrator | ARA in osism-ansible already disabled.
2026-02-05 01:57:49.009733 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-05 01:57:49.010055 | orchestrator | + osism apply gather-facts
2026-02-05 01:58:01.101554 | orchestrator | 2026-02-05 01:58:01 | INFO  | Task f2df0df5-1cef-43b0-844b-d4b9287add90 (gather-facts) was prepared for execution.
2026-02-05 01:58:01.101630 | orchestrator | 2026-02-05 01:58:01 | INFO  | It takes a moment until task f2df0df5-1cef-43b0-844b-d4b9287add90 (gather-facts) has been started and output is visible here.
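The `set -x` trace above shows each `wait_for_container_healthy` call polling `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy`. A minimal sketch of such a helper, reconstructed from the trace (not the actual testbed script); `HEALTH_CMD` and `SLEEP_SECONDS` are illustrative parameters added so the loop can be exercised without a Docker daemon:

```shell
# Sketch of the wait_for_container_healthy loop visible in the trace above.
# Assumption: HEALTH_CMD defaults to the docker inspect call from the log,
# but can be overridden with any command printing a health status.
HEALTH_CMD="${HEALTH_CMD:-/usr/bin/docker inspect -f {{.State.Health.Status}}}"
SLEEP_SECONDS="${SLEEP_SECONDS:-5}"

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    # Poll until the probe reports "healthy", giving up after max_attempts.
    while [ "$($HEALTH_CMD "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "$SLEEP_SECONDS"
    done
    return 0
}
```

In the run above, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before the probe finally returns `healthy`, while `kolla-ansible` and `osism-ansible` pass on their first check.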
2026-02-05 01:58:13.505602 | orchestrator | 2026-02-05 01:58:13.505680 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 01:58:13.505688 | orchestrator | 2026-02-05 01:58:13.505692 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-05 01:58:13.505697 | orchestrator | Thursday 05 February 2026 01:58:05 +0000 (0:00:00.190) 0:00:00.190 ***** 2026-02-05 01:58:13.505701 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:58:13.505707 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:58:13.505711 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:58:13.505715 | orchestrator | ok: [testbed-manager] 2026-02-05 01:58:13.505719 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:58:13.505723 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:58:13.505727 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:58:13.505731 | orchestrator | 2026-02-05 01:58:13.505735 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 01:58:13.505739 | orchestrator | 2026-02-05 01:58:13.505743 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 01:58:13.505746 | orchestrator | Thursday 05 February 2026 01:58:12 +0000 (0:00:07.535) 0:00:07.726 ***** 2026-02-05 01:58:13.505751 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:58:13.505755 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:58:13.505759 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:58:13.505763 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:58:13.505767 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:58:13.505771 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:58:13.505775 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:58:13.505779 | orchestrator | 2026-02-05 01:58:13.505783 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-05 01:58:13.505787 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505792 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505796 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505800 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505804 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505808 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505812 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 01:58:13.505834 | orchestrator | 2026-02-05 01:58:13.505838 | orchestrator | 2026-02-05 01:58:13.505842 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:58:13.505846 | orchestrator | Thursday 05 February 2026 01:58:13 +0000 (0:00:00.548) 0:00:08.274 ***** 2026-02-05 01:58:13.505850 | orchestrator | =============================================================================== 2026-02-05 01:58:13.505854 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.54s 2026-02-05 01:58:13.505858 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-05 01:58:13.826566 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-05 01:58:13.837701 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-05 
01:58:13.863925 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-05 01:58:13.874469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-05 01:58:13.884283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-05 01:58:13.894543 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-05 01:58:13.908986 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-05 01:58:13.921223 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-05 01:58:13.936916 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-05 01:58:13.951772 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-05 01:58:13.967568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-05 01:58:13.985256 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-05 01:58:13.996971 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-05 01:58:14.006824 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-05 01:58:14.026292 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-05 01:58:14.038890 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-05 01:58:14.053852 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-05 01:58:14.066298 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-05 01:58:14.081210 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-05 01:58:14.094920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-05 01:58:14.107747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-05 01:58:14.133385 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-05 01:58:14.147068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-05 01:58:14.166236 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-05 01:58:14.512689 | orchestrator | ok: Runtime: 0:23:37.935908 2026-02-05 01:58:14.613708 | 2026-02-05 01:58:14.613840 | TASK [Deploy services] 2026-02-05 01:58:15.385363 | orchestrator | 2026-02-05 01:58:15.385573 | orchestrator | # DEPLOY SERVICES 2026-02-05 01:58:15.385592 | orchestrator | 2026-02-05 01:58:15.385600 | orchestrator | + set -e 2026-02-05 01:58:15.385607 | orchestrator | + echo 2026-02-05 01:58:15.385616 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-05 01:58:15.385627 | orchestrator | + echo 2026-02-05 01:58:15.385656 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 01:58:15.385673 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 01:58:15.385682 | orchestrator | ++ INTERACTIVE=false 2026-02-05 
01:58:15.385690 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 01:58:15.385702 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 01:58:15.385708 | orchestrator | + source /opt/manager-vars.sh 2026-02-05 01:58:15.385718 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-05 01:58:15.385724 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-05 01:58:15.385734 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-05 01:58:15.385739 | orchestrator | ++ CEPH_VERSION=reef 2026-02-05 01:58:15.385747 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-05 01:58:15.385754 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-05 01:58:15.385764 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-05 01:58:15.385770 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-05 01:58:15.385777 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-05 01:58:15.385784 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-05 01:58:15.385792 | orchestrator | ++ export ARA=false 2026-02-05 01:58:15.385802 | orchestrator | ++ ARA=false 2026-02-05 01:58:15.385809 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-05 01:58:15.385816 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-05 01:58:15.385837 | orchestrator | ++ export TEMPEST=false 2026-02-05 01:58:15.385844 | orchestrator | ++ TEMPEST=false 2026-02-05 01:58:15.385850 | orchestrator | ++ export IS_ZUUL=true 2026-02-05 01:58:15.385856 | orchestrator | ++ IS_ZUUL=true 2026-02-05 01:58:15.385862 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-02-05 01:58:15.385869 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-02-05 01:58:15.385876 | orchestrator | ++ export EXTERNAL_API=false 2026-02-05 01:58:15.385882 | orchestrator | ++ EXTERNAL_API=false 2026-02-05 01:58:15.385894 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-05 01:58:15.385899 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-05 01:58:15.385903 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-05 
01:58:15.385906 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 01:58:15.385910 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 01:58:15.385919 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 01:58:15.385923 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-05 01:58:15.394209 | orchestrator |
2026-02-05 01:58:15.394281 | orchestrator | # PULL IMAGES
2026-02-05 01:58:15.394287 | orchestrator |
2026-02-05 01:58:15.394291 | orchestrator | + set -e
2026-02-05 01:58:15.394297 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 01:58:15.394307 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 01:58:15.394315 | orchestrator | ++ INTERACTIVE=false
2026-02-05 01:58:15.394322 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 01:58:15.394328 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 01:58:15.394335 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 01:58:15.394341 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 01:58:15.394356 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 01:58:15.394363 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 01:58:15.394370 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 01:58:15.394376 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 01:58:15.394383 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 01:58:15.394389 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 01:58:15.394396 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 01:58:15.394403 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 01:58:15.394409 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 01:58:15.394415 | orchestrator | ++ export ARA=false
2026-02-05 01:58:15.394422 | orchestrator | ++ ARA=false
2026-02-05 01:58:15.394432 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 01:58:15.394438 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 01:58:15.394443 | orchestrator | ++ export TEMPEST=false
2026-02-05 01:58:15.394449 | orchestrator | ++ TEMPEST=false
2026-02-05 01:58:15.394455 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 01:58:15.394460 | orchestrator | ++ IS_ZUUL=true
2026-02-05 01:58:15.394466 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:58:15.394473 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:58:15.394478 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 01:58:15.394484 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 01:58:15.394490 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 01:58:15.394496 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 01:58:15.394563 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 01:58:15.394572 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 01:58:15.394579 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 01:58:15.394586 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 01:58:15.394592 | orchestrator | + echo
2026-02-05 01:58:15.394599 | orchestrator | + echo '# PULL IMAGES'
2026-02-05 01:58:15.394606 | orchestrator | + echo
2026-02-05 01:58:15.395020 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-05 01:58:15.446472 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 01:58:15.446584 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-05 01:58:17.360352 | orchestrator | 2026-02-05 01:58:17 | INFO  | Trying to run play pull-images in environment custom
2026-02-05 01:58:27.567689 | orchestrator | 2026-02-05 01:58:27 | INFO  | Task 94a33e36-07c3-44f3-abf5-faac586bc9ae (pull-images) was prepared for execution.
2026-02-05 01:58:27.567795 | orchestrator | 2026-02-05 01:58:27 | INFO  | Task 94a33e36-07c3-44f3-abf5-faac586bc9ae is running in background. No more output. Check ARA for logs.
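The trace above shows the recurring version-gate pattern in these scripts: a `semver` helper compares `MANAGER_VERSION` against a minimum, and the step runs only when the result is `-ge 0`. A minimal sketch of that pattern, assuming the real `semver` helper prints `-1`/`0`/`1` like a comparator (this stand-in uses `sort -V`; the real helper's implementation is not shown in the log):

```shell
#!/bin/sh
# Sketch of the semver gate seen in pull-images.sh. semver_cmp is a
# hypothetical stand-in for the log's `semver` helper: it prints 1 if $1 > $2,
# 0 if they are equal, and -1 if $1 < $2, using GNU sort's version ordering.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    echo 1
  else
    echo -1
  fi
}

MANAGER_VERSION=9.5.0
# Run the step only on manager versions >= 7.0.0, mirroring `[[ 1 -ge 0 ]]`.
if [ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ]; then
  echo "pulling images"   # stands in for: osism apply --no-wait -r 2 -e custom pull-images
fi
```

The same gate appears again later in the log (`semver 9.5.0 8.0.3` before `osism apply frr`), so upgrade jobs can skip steps that older manager releases do not support.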
2026-02-05 01:58:27.762059 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-02-05 01:58:39.558130 | orchestrator | 2026-02-05 01:58:39 | INFO  | Task 9341730e-f971-4e46-8321-08849581e004 (cgit) was prepared for execution.
2026-02-05 01:58:39.558230 | orchestrator | 2026-02-05 01:58:39 | INFO  | Task 9341730e-f971-4e46-8321-08849581e004 is running in background. No more output. Check ARA for logs.
2026-02-05 01:58:51.806443 | orchestrator | 2026-02-05 01:58:51 | INFO  | Task c7e7976c-0d3e-484f-9872-09b62592b55b (dotfiles) was prepared for execution.
2026-02-05 01:58:51.806556 | orchestrator | 2026-02-05 01:58:51 | INFO  | Task c7e7976c-0d3e-484f-9872-09b62592b55b is running in background. No more output. Check ARA for logs.
2026-02-05 01:59:04.247918 | orchestrator | 2026-02-05 01:59:04 | INFO  | Task ec9dd987-f04c-4c6e-a1d1-ae4d9c71ca9b (homer) was prepared for execution.
2026-02-05 01:59:04.248009 | orchestrator | 2026-02-05 01:59:04 | INFO  | Task ec9dd987-f04c-4c6e-a1d1-ae4d9c71ca9b is running in background. No more output. Check ARA for logs.
2026-02-05 01:59:16.916807 | orchestrator | 2026-02-05 01:59:16 | INFO  | Task 0e211325-4928-4386-a918-d937e9c1fe42 (phpmyadmin) was prepared for execution.
2026-02-05 01:59:16.916877 | orchestrator | 2026-02-05 01:59:16 | INFO  | Task 0e211325-4928-4386-a918-d937e9c1fe42 is running in background. No more output. Check ARA for logs.
2026-02-05 01:59:29.254746 | orchestrator | 2026-02-05 01:59:29 | INFO  | Task df2a0862-bd78-4cf2-9af9-c73d7993b8dd (sosreport) was prepared for execution.
2026-02-05 01:59:29.254821 | orchestrator | 2026-02-05 01:59:29 | INFO  | Task df2a0862-bd78-4cf2-9af9-c73d7993b8dd is running in background. No more output. Check ARA for logs.
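The helper step above queues five independent services (cgit, dotfiles, homer, phpmyadmin, sosreport) as background tasks rather than waiting for each to finish. A minimal sketch of that fire-and-forget dispatch pattern, where `queue_play` is a hypothetical stand-in for `osism apply --no-wait <play>` (whether the real 001-helpers.sh loops like this is an assumption; only the per-service log lines are visible):

```shell
#!/bin/sh
# Fire-and-forget dispatch of helper plays, mirroring the 001-helpers.sh output.
# queue_play is a hypothetical stand-in for `osism apply --no-wait "$1"`.
queue_play() {
  echo "queued $1"
}

# Service list taken from the task names in the log above.
for play in cgit dotfiles homer phpmyadmin sosreport; do
  queue_play "$play"
done
```

Because the tasks run in the background, their actual results only appear in ARA, not in this console stream.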
2026-02-05 01:59:29.755176 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-02-05 01:59:29.760101 | orchestrator | + set -e
2026-02-05 01:59:29.760174 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 01:59:29.760181 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 01:59:29.760186 | orchestrator | ++ INTERACTIVE=false
2026-02-05 01:59:29.760192 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 01:59:29.760196 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 01:59:29.762176 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 01:59:29.762225 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 01:59:29.762234 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 01:59:29.762240 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 01:59:29.762247 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 01:59:29.762292 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 01:59:29.762300 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 01:59:29.762304 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 01:59:29.762309 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 01:59:29.762313 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 01:59:29.762351 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 01:59:29.762357 | orchestrator | ++ export ARA=false
2026-02-05 01:59:29.762361 | orchestrator | ++ ARA=false
2026-02-05 01:59:29.762365 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 01:59:29.762408 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 01:59:29.762452 | orchestrator | ++ export TEMPEST=false
2026-02-05 01:59:29.762457 | orchestrator | ++ TEMPEST=false
2026-02-05 01:59:29.762461 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 01:59:29.762465 | orchestrator | ++ IS_ZUUL=true
2026-02-05 01:59:29.762482 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:59:29.762490 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 01:59:29.762494 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 01:59:29.762498 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 01:59:29.762502 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 01:59:29.762505 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 01:59:29.762509 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 01:59:29.762536 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 01:59:29.762543 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 01:59:29.762549 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 01:59:29.765898 | orchestrator | ++ semver 9.5.0 8.0.3
2026-02-05 01:59:29.835868 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 01:59:29.835913 | orchestrator | + osism apply frr
2026-02-05 01:59:42.926189 | orchestrator | 2026-02-05 01:59:42 | INFO  | Task eacf7c05-6c66-4fd5-8175-85461f587ad7 (frr) was prepared for execution.
2026-02-05 01:59:42.926249 | orchestrator | 2026-02-05 01:59:42 | INFO  | It takes a moment until task eacf7c05-6c66-4fd5-8175-85461f587ad7 (frr) has been started and output is visible here.
2026-02-05 02:00:10.713771 | orchestrator |
2026-02-05 02:00:10.713822 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-05 02:00:10.713829 | orchestrator |
2026-02-05 02:00:10.713834 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-05 02:00:10.713842 | orchestrator | Thursday 05 February 2026 01:59:47 +0000 (0:00:00.361) 0:00:00.361 *****
2026-02-05 02:00:10.713846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 02:00:10.713851 | orchestrator |
2026-02-05 02:00:10.713855 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-05 02:00:10.713859 | orchestrator | Thursday 05 February 2026 01:59:47 +0000 (0:00:00.200) 0:00:00.562 *****
2026-02-05 02:00:10.713863 | orchestrator | changed: [testbed-manager]
2026-02-05 02:00:10.713867 | orchestrator |
2026-02-05 02:00:10.713871 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-05 02:00:10.713876 | orchestrator | Thursday 05 February 2026 01:59:48 +0000 (0:00:01.119) 0:00:01.681 *****
2026-02-05 02:00:10.713880 | orchestrator | changed: [testbed-manager]
2026-02-05 02:00:10.713884 | orchestrator |
2026-02-05 02:00:10.713887 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-05 02:00:10.713891 | orchestrator | Thursday 05 February 2026 01:59:58 +0000 (0:00:10.021) 0:00:11.702 *****
2026-02-05 02:00:10.713895 | orchestrator | ok: [testbed-manager]
2026-02-05 02:00:10.713899 | orchestrator |
2026-02-05 02:00:10.713903 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-05 02:00:10.713907 | orchestrator | Thursday 05 February 2026 01:59:59 +0000 (0:00:00.989) 0:00:12.691 *****
2026-02-05 02:00:10.713911 | orchestrator | changed: [testbed-manager]
2026-02-05 02:00:10.713915 | orchestrator |
2026-02-05 02:00:10.713918 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-05 02:00:10.713922 | orchestrator | Thursday 05 February 2026 02:00:00 +0000 (0:00:00.975) 0:00:13.666 *****
2026-02-05 02:00:10.713926 | orchestrator | ok: [testbed-manager]
2026-02-05 02:00:10.713930 | orchestrator |
2026-02-05 02:00:10.713934 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-05 02:00:10.713938 | orchestrator | Thursday 05 February 2026 02:00:02 +0000 (0:00:01.167) 0:00:14.834 *****
2026-02-05 02:00:10.713942 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:00:10.713946 | orchestrator |
2026-02-05 02:00:10.713949 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-05 02:00:10.713953 | orchestrator | Thursday 05 February 2026 02:00:02 +0000 (0:00:00.149) 0:00:14.983 *****
2026-02-05 02:00:10.713967 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:00:10.713971 | orchestrator |
2026-02-05 02:00:10.713975 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-05 02:00:10.713979 | orchestrator | Thursday 05 February 2026 02:00:02 +0000 (0:00:00.176) 0:00:15.159 *****
2026-02-05 02:00:10.713982 | orchestrator | changed: [testbed-manager]
2026-02-05 02:00:10.713986 | orchestrator |
2026-02-05 02:00:10.713990 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-05 02:00:10.713994 | orchestrator | Thursday 05 February 2026 02:00:03 +0000 (0:00:00.853) 0:00:16.013 *****
2026-02-05 02:00:10.713998 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-05 02:00:10.714001 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-05 02:00:10.714006 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-05 02:00:10.714010 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-05 02:00:10.714038 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-05 02:00:10.714042 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-05 02:00:10.714046 | orchestrator |
2026-02-05 02:00:10.714050 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-05 02:00:10.714054 | orchestrator | Thursday 05 February 2026 02:00:06 +0000 (0:00:03.565) 0:00:19.579 *****
2026-02-05 02:00:10.714058 | orchestrator | ok: [testbed-manager]
2026-02-05 02:00:10.714061 | orchestrator |
2026-02-05 02:00:10.714065 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-02-05 02:00:10.714069 | orchestrator | Thursday 05 February 2026 02:00:08 +0000 (0:00:01.719) 0:00:21.299 *****
2026-02-05 02:00:10.714072 | orchestrator | changed: [testbed-manager]
2026-02-05 02:00:10.714076 | orchestrator |
2026-02-05 02:00:10.714080 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:00:10.714084 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 02:00:10.714088 | orchestrator |
2026-02-05 02:00:10.714096 | orchestrator |
2026-02-05 02:00:10.714103 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:00:10.714107 | orchestrator | Thursday 05 February 2026 02:00:10 +0000 (0:00:01.772) 0:00:23.071 *****
2026-02-05 02:00:10.714111 | orchestrator | ===============================================================================
2026-02-05 02:00:10.714115 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.02s
2026-02-05 02:00:10.714118 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.57s
2026-02-05 02:00:10.714122 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.77s
2026-02-05 02:00:10.714126 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.72s
2026-02-05 02:00:10.714129 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s
2026-02-05 02:00:10.714140 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.12s
2026-02-05 02:00:10.714144 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s
2026-02-05 02:00:10.714148 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s
2026-02-05 02:00:10.714152 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.85s
2026-02-05 02:00:10.714155 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-02-05 02:00:10.714159 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s
2026-02-05 02:00:10.714163 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s
2026-02-05 02:00:11.034728 | orchestrator | + osism apply kubernetes
2026-02-05 02:00:13.131837 | orchestrator | 2026-02-05 02:00:13 | INFO  | Task d4bb46fc-20ed-4113-a3e7-421ecd0bca09 (kubernetes) was prepared for execution.
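For reference, the six kernel parameters the frr role applied above, collected into a single sysctl.d-style fragment. The values are copied from the log's loop items; the file name is hypothetical (the role applies them via Ansible's sysctl handling, not necessarily through this exact file):

```
# /etc/sysctl.d/90-frr-testbed.conf -- hypothetical file name; values from the log above
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

Together these enable routing on the manager, disable ICMP redirects, hash ECMP flows on layer-3/4 fields, ignore routes on link-down interfaces, and use loose reverse-path filtering, which fits the BGP-to-the-host setup the frr.conf of type k3s_cilium suggests.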
2026-02-05 02:00:13.131897 | orchestrator | 2026-02-05 02:00:13 | INFO  | It takes a moment until task d4bb46fc-20ed-4113-a3e7-421ecd0bca09 (kubernetes) has been started and output is visible here.
2026-02-05 02:00:42.354057 | orchestrator |
2026-02-05 02:00:42.354134 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-05 02:00:42.354143 | orchestrator |
2026-02-05 02:00:42.354148 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-05 02:00:42.354153 | orchestrator | Thursday 05 February 2026 02:00:18 +0000 (0:00:00.275) 0:00:00.275 *****
2026-02-05 02:00:42.354157 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:00:42.354162 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:00:42.354166 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:00:42.354170 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:00:42.354174 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:00:42.354178 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:00:42.354182 | orchestrator |
2026-02-05 02:00:42.354186 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-05 02:00:42.354190 | orchestrator | Thursday 05 February 2026 02:00:18 +0000 (0:00:00.770) 0:00:01.045 *****
2026-02-05 02:00:42.354194 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354198 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354202 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354206 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354212 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354218 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354224 | orchestrator |
2026-02-05 02:00:42.354230 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-05 02:00:42.354238 | orchestrator | Thursday 05 February 2026 02:00:19 +0000 (0:00:00.477) 0:00:01.523 *****
2026-02-05 02:00:42.354245 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354251 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354258 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354264 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354270 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354277 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354283 | orchestrator |
2026-02-05 02:00:42.354290 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-05 02:00:42.354296 | orchestrator | Thursday 05 February 2026 02:00:19 +0000 (0:00:00.502) 0:00:02.026 *****
2026-02-05 02:00:42.354300 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:00:42.354303 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:00:42.354307 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:00:42.354313 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:00:42.354317 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:00:42.354321 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:00:42.354325 | orchestrator |
2026-02-05 02:00:42.354329 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-05 02:00:42.354333 | orchestrator | Thursday 05 February 2026 02:00:22 +0000 (0:00:02.241) 0:00:04.267 *****
2026-02-05 02:00:42.354337 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:00:42.354341 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:00:42.354345 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:00:42.354349 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:00:42.354353 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:00:42.354359 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:00:42.354365 | orchestrator |
2026-02-05 02:00:42.354371 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-05 02:00:42.354377 | orchestrator | Thursday 05 February 2026 02:00:22 +0000 (0:00:00.868) 0:00:05.136 *****
2026-02-05 02:00:42.354383 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:00:42.354409 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:00:42.354416 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:00:42.354422 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:00:42.354430 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:00:42.354434 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:00:42.354437 | orchestrator |
2026-02-05 02:00:42.354449 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-05 02:00:42.354453 | orchestrator | Thursday 05 February 2026 02:00:24 +0000 (0:00:01.421) 0:00:06.557 *****
2026-02-05 02:00:42.354457 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354460 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354464 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354468 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354472 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354476 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354479 | orchestrator |
2026-02-05 02:00:42.354483 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-05 02:00:42.354487 | orchestrator | Thursday 05 February 2026 02:00:24 +0000 (0:00:00.490) 0:00:07.048 *****
2026-02-05 02:00:42.354491 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354494 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354498 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354502 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354506 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354529 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354534 | orchestrator |
2026-02-05 02:00:42.354537 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-05 02:00:42.354541 | orchestrator | Thursday 05 February 2026 02:00:25 +0000 (0:00:00.583) 0:00:07.632 *****
2026-02-05 02:00:42.354547 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 02:00:42.354553 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 02:00:42.354559 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354565 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 02:00:42.354571 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 02:00:42.354577 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354583 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 02:00:42.354589 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 02:00:42.354596 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354601 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 02:00:42.354620 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 02:00:42.354625 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354630 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 02:00:42.354635 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 02:00:42.354639 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354644 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 02:00:42.354648 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 02:00:42.354653 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354658 | orchestrator |
2026-02-05 02:00:42.354662 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-05 02:00:42.354666 | orchestrator | Thursday 05 February 2026 02:00:25 +0000 (0:00:00.476) 0:00:08.108 *****
2026-02-05 02:00:42.354670 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354675 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354680 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354688 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354693 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354697 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354702 | orchestrator |
2026-02-05 02:00:42.354706 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-05 02:00:42.354712 | orchestrator | Thursday 05 February 2026 02:00:26 +0000 (0:00:00.957) 0:00:09.066 *****
2026-02-05 02:00:42.354717 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:00:42.354722 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:00:42.354726 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:00:42.354730 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:00:42.354735 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:00:42.354739 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:00:42.354744 | orchestrator |
2026-02-05 02:00:42.354749 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-05 02:00:42.354753 | orchestrator | Thursday 05 February 2026 02:00:27 +0000 (0:00:01.043) 0:00:10.110 *****
2026-02-05 02:00:42.354758 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:00:42.354762 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:00:42.354767 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:00:42.354771 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:00:42.354776 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:00:42.354780 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:00:42.354784 | orchestrator |
2026-02-05 02:00:42.354789 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-05 02:00:42.354793 | orchestrator | Thursday 05 February 2026 02:00:38 +0000 (0:00:10.853) 0:00:20.963 *****
2026-02-05 02:00:42.354798 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354805 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354810 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354815 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354819 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354824 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354828 | orchestrator |
2026-02-05 02:00:42.354832 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-05 02:00:42.354836 | orchestrator | Thursday 05 February 2026 02:00:39 +0000 (0:00:00.769) 0:00:21.732 *****
2026-02-05 02:00:42.354840 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354844 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354847 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354851 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354855 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354858 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354862 | orchestrator |
2026-02-05 02:00:42.354866 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-05 02:00:42.354871 | orchestrator | Thursday 05 February 2026 02:00:40 +0000 (0:00:01.362) 0:00:23.095 *****
2026-02-05 02:00:42.354875 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354879 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354882 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354886 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354890 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354893 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354897 | orchestrator |
2026-02-05 02:00:42.354901 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-05 02:00:42.354905 | orchestrator | Thursday 05 February 2026 02:00:41 +0000 (0:00:00.709) 0:00:23.804 *****
2026-02-05 02:00:42.354909 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-05 02:00:42.354916 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-05 02:00:42.354920 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:00:42.354924 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-05 02:00:42.354931 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-05 02:00:42.354935 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:00:42.354939 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-05 02:00:42.354942 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-05 02:00:42.354946 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:00:42.354950 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-05 02:00:42.354954 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-05 02:00:42.354958 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:00:42.354961 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-05 02:00:42.354965 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-05 02:00:42.354969 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:00:42.354973 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-05 02:00:42.354976 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-05 02:00:42.354980 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:00:42.354984 | orchestrator |
2026-02-05 02:00:42.354988 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-05 02:00:42.354994 | orchestrator | Thursday 05 February 2026 02:00:42 +0000 (0:00:00.744) 0:00:24.549 *****
2026-02-05 02:01:55.403282 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:01:55.403343 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:01:55.403353 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:01:55.403360 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:01:55.403367 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:01:55.403412 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:01:55.403419 | orchestrator |
2026-02-05 02:01:55.403427 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-05 02:01:55.403436 | orchestrator | Thursday 05 February 2026 02:00:42 +0000 (0:00:00.617) 0:00:25.167 *****
2026-02-05 02:01:55.403445 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:01:55.403451 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:01:55.403457 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:01:55.403463 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:01:55.403469 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:01:55.403476 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:01:55.403485 | orchestrator |
2026-02-05 02:01:55.403493 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-05 02:01:55.403499 | orchestrator |
2026-02-05 02:01:55.403539 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-05 02:01:55.403548 | orchestrator | Thursday 05 February 2026 02:00:44 +0000 (0:00:01.379) 0:00:26.547 *****
2026-02-05 02:01:55.403555 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:01:55.403562 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:01:55.403572 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:01:55.403579 | orchestrator |
2026-02-05 02:01:55.403585 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-05 02:01:55.403591 | orchestrator | Thursday 05 February 2026 02:00:45 +0000 (0:00:01.017) 0:00:27.564 *****
2026-02-05 02:01:55.403597 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:01:55.403604 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:01:55.403610 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:01:55.403620 | orchestrator |
2026-02-05 02:01:55.403627 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-05 02:01:55.403633 | orchestrator | Thursday 05 February 2026 02:00:46 +0000 (0:00:01.558) 0:00:29.123 *****
2026-02-05 02:01:55.403639 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:01:55.403645 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:01:55.403651 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:01:55.403658 | orchestrator |
2026-02-05 02:01:55.403664 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-05 02:01:55.403683 | orchestrator | Thursday 05 February 2026 02:00:47 +0000 (0:00:00.670) 0:00:29.976 *****
2026-02-05 02:01:55.403691 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:01:55.403697 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:01:55.403704 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:01:55.403710 | orchestrator |
2026-02-05 02:01:55.403719 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-05 02:01:55.403727 | orchestrator | Thursday 05 February 2026 02:00:48 +0000 (0:00:00.346) 0:00:30.647 *****
2026-02-05 02:01:55.403733 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:01:55.403739 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:01:55.403746 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:01:55.403752 | orchestrator |
2026-02-05 02:01:55.403758 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-05 02:01:55.403772 | orchestrator | Thursday 05 February 2026 02:00:48 +0000 (0:00:00.346) 0:00:30.994 *****
2026-02-05 02:01:55.403781 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:01:55.403787 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:01:55.403793 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:01:55.403799 | orchestrator |
2026-02-05 02:01:55.403805 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-05 02:01:55.403812 | orchestrator | Thursday 05 February 2026 02:00:49 +0000 (0:00:00.686) 0:00:31.681 *****
2026-02-05 02:01:55.403821 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:01:55.403828 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:01:55.403834 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:01:55.403841 | orchestrator |
2026-02-05 02:01:55.403847 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-05 02:01:55.403853 | orchestrator | Thursday 05 February 2026 02:00:51 +0000 (0:00:01.534) 0:00:33.216 *****
2026-02-05 02:01:55.403859 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:01:55.403865 | orchestrator |
2026-02-05 02:01:55.403873 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-05 02:01:55.403882 | orchestrator |
Thursday 05 February 2026 02:00:51 +0000 (0:00:00.519) 0:00:33.735 ***** 2026-02-05 02:01:55.403888 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:01:55.403894 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:01:55.403900 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:01:55.403906 | orchestrator | 2026-02-05 02:01:55.403912 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-05 02:01:55.403921 | orchestrator | Thursday 05 February 2026 02:00:53 +0000 (0:00:01.599) 0:00:35.335 ***** 2026-02-05 02:01:55.403929 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:01:55.403935 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:01:55.403941 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:01:55.403948 | orchestrator | 2026-02-05 02:01:55.403954 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-05 02:01:55.403960 | orchestrator | Thursday 05 February 2026 02:00:53 +0000 (0:00:00.679) 0:00:36.015 ***** 2026-02-05 02:01:55.403966 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:01:55.403973 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:01:55.403983 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:01:55.403989 | orchestrator | 2026-02-05 02:01:55.403995 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-05 02:01:55.404002 | orchestrator | Thursday 05 February 2026 02:00:54 +0000 (0:00:00.996) 0:00:37.011 ***** 2026-02-05 02:01:55.404008 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:01:55.404014 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:01:55.404020 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:01:55.404030 | orchestrator | 2026-02-05 02:01:55.404036 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-05 02:01:55.404054 | orchestrator | Thursday 05 February 
2026 02:00:56 +0000 (0:00:01.271) 0:00:38.282 ***** 2026-02-05 02:01:55.404060 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:01:55.404071 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:01:55.404080 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:01:55.404088 | orchestrator | 2026-02-05 02:01:55.404094 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-05 02:01:55.404100 | orchestrator | Thursday 05 February 2026 02:00:56 +0000 (0:00:00.304) 0:00:38.587 ***** 2026-02-05 02:01:55.404106 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:01:55.404112 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:01:55.404118 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:01:55.404127 | orchestrator | 2026-02-05 02:01:55.404135 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-05 02:01:55.404141 | orchestrator | Thursday 05 February 2026 02:00:56 +0000 (0:00:00.505) 0:00:39.092 ***** 2026-02-05 02:01:55.404148 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:01:55.404154 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:01:55.404160 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:01:55.404166 | orchestrator | 2026-02-05 02:01:55.404175 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-05 02:01:55.404185 | orchestrator | Thursday 05 February 2026 02:00:57 +0000 (0:00:01.093) 0:00:40.186 ***** 2026-02-05 02:01:55.404192 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:01:55.404198 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:01:55.404204 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:01:55.404210 | orchestrator | 2026-02-05 02:01:55.404216 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-05 02:01:55.404223 | orchestrator | Thursday 05 February 2026 02:01:00 +0000 
(0:00:02.439) 0:00:42.625 ***** 2026-02-05 02:01:55.404232 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:01:55.404239 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:01:55.404246 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:01:55.404258 | orchestrator | 2026-02-05 02:01:55.404265 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-05 02:01:55.404272 | orchestrator | Thursday 05 February 2026 02:01:00 +0000 (0:00:00.313) 0:00:42.938 ***** 2026-02-05 02:01:55.404279 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 02:01:55.404286 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 02:01:55.404292 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 02:01:55.404299 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 02:01:55.404308 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 02:01:55.404315 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 02:01:55.404321 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 02:01:55.404327 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-05 02:01:55.404333 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 02:01:55.404339 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 02:01:55.404346 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 02:01:55.404359 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 02:01:55.404368 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-05 02:01:55.404374 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-05 02:01:55.404380 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-05 02:01:55.404386 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:01:55.404393 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:01:55.404399 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:01:55.404405 | orchestrator | 2026-02-05 02:01:55.404418 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-05 02:01:55.404425 | orchestrator | Thursday 05 February 2026 02:01:53 +0000 (0:00:53.259) 0:01:36.198 ***** 2026-02-05 02:01:55.404432 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:01:55.404438 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:01:55.404444 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:01:55.404450 | orchestrator | 2026-02-05 02:01:55.404456 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-05 02:01:55.404462 | orchestrator | Thursday 05 February 2026 02:01:54 +0000 (0:00:00.441) 0:01:36.639 ***** 2026-02-05 02:01:55.404475 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728224 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728275 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728281 | orchestrator | 2026-02-05 02:02:34.728292 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-05 02:02:34.728297 | orchestrator | Thursday 05 February 2026 02:01:55 +0000 (0:00:00.958) 0:01:37.597 ***** 2026-02-05 02:02:34.728301 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728305 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728309 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728312 | orchestrator | 2026-02-05 02:02:34.728322 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-05 02:02:34.728327 | orchestrator | Thursday 05 February 2026 02:01:56 +0000 (0:00:01.074) 0:01:38.672 ***** 2026-02-05 02:02:34.728335 
| orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728339 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728342 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728346 | orchestrator | 2026-02-05 02:02:34.728350 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-05 02:02:34.728354 | orchestrator | Thursday 05 February 2026 02:02:21 +0000 (0:00:24.597) 0:02:03.270 ***** 2026-02-05 02:02:34.728358 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:02:34.728362 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:02:34.728366 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:02:34.728370 | orchestrator | 2026-02-05 02:02:34.728374 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-05 02:02:34.728377 | orchestrator | Thursday 05 February 2026 02:02:21 +0000 (0:00:00.780) 0:02:04.051 ***** 2026-02-05 02:02:34.728381 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:02:34.728385 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:02:34.728389 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:02:34.728393 | orchestrator | 2026-02-05 02:02:34.728397 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-05 02:02:34.728400 | orchestrator | Thursday 05 February 2026 02:02:22 +0000 (0:00:00.612) 0:02:04.663 ***** 2026-02-05 02:02:34.728404 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728408 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728412 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728416 | orchestrator | 2026-02-05 02:02:34.728419 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-05 02:02:34.728436 | orchestrator | Thursday 05 February 2026 02:02:23 +0000 (0:00:00.577) 0:02:05.241 ***** 2026-02-05 02:02:34.728440 | orchestrator | ok: [testbed-node-1] 
2026-02-05 02:02:34.728444 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:02:34.728448 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:02:34.728451 | orchestrator | 2026-02-05 02:02:34.728455 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-05 02:02:34.728459 | orchestrator | Thursday 05 February 2026 02:02:23 +0000 (0:00:00.582) 0:02:05.823 ***** 2026-02-05 02:02:34.728463 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:02:34.728467 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:02:34.728470 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:02:34.728474 | orchestrator | 2026-02-05 02:02:34.728478 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-05 02:02:34.728482 | orchestrator | Thursday 05 February 2026 02:02:24 +0000 (0:00:00.427) 0:02:06.251 ***** 2026-02-05 02:02:34.728485 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728489 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728493 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728497 | orchestrator | 2026-02-05 02:02:34.728532 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-05 02:02:34.728539 | orchestrator | Thursday 05 February 2026 02:02:24 +0000 (0:00:00.615) 0:02:06.866 ***** 2026-02-05 02:02:34.728549 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728563 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728575 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728581 | orchestrator | 2026-02-05 02:02:34.728588 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-05 02:02:34.728594 | orchestrator | Thursday 05 February 2026 02:02:25 +0000 (0:00:00.563) 0:02:07.429 ***** 2026-02-05 02:02:34.728601 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728608 | 
orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728614 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728621 | orchestrator | 2026-02-05 02:02:34.728628 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-05 02:02:34.728634 | orchestrator | Thursday 05 February 2026 02:02:26 +0000 (0:00:00.856) 0:02:08.286 ***** 2026-02-05 02:02:34.728642 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:02:34.728649 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:02:34.728656 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:02:34.728663 | orchestrator | 2026-02-05 02:02:34.728670 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-05 02:02:34.728674 | orchestrator | Thursday 05 February 2026 02:02:26 +0000 (0:00:00.798) 0:02:09.084 ***** 2026-02-05 02:02:34.728678 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:02:34.728682 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:02:34.728686 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:02:34.728690 | orchestrator | 2026-02-05 02:02:34.728693 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-05 02:02:34.728697 | orchestrator | Thursday 05 February 2026 02:02:27 +0000 (0:00:00.504) 0:02:09.589 ***** 2026-02-05 02:02:34.728701 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:02:34.728705 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:02:34.728709 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:02:34.728712 | orchestrator | 2026-02-05 02:02:34.728716 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-05 02:02:34.728720 | orchestrator | Thursday 05 February 2026 02:02:27 +0000 (0:00:00.295) 0:02:09.884 ***** 2026-02-05 02:02:34.728724 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:02:34.728727 | orchestrator | 
ok: [testbed-node-1] 2026-02-05 02:02:34.728731 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:02:34.728735 | orchestrator | 2026-02-05 02:02:34.728739 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-05 02:02:34.728742 | orchestrator | Thursday 05 February 2026 02:02:28 +0000 (0:00:00.617) 0:02:10.501 ***** 2026-02-05 02:02:34.728752 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:02:34.728756 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:02:34.728769 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:02:34.728773 | orchestrator | 2026-02-05 02:02:34.728777 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-05 02:02:34.728782 | orchestrator | Thursday 05 February 2026 02:02:28 +0000 (0:00:00.624) 0:02:11.126 ***** 2026-02-05 02:02:34.728786 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-05 02:02:34.728790 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-05 02:02:34.728793 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-05 02:02:34.728797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-05 02:02:34.728801 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-05 02:02:34.728805 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-05 02:02:34.728809 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-05 02:02:34.728813 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-05 
02:02:34.728817 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-05 02:02:34.728820 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-05 02:02:34.728825 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-05 02:02:34.728830 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-05 02:02:34.728834 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-05 02:02:34.728839 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-05 02:02:34.728843 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-05 02:02:34.728848 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-05 02:02:34.728852 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-05 02:02:34.728857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-05 02:02:34.728861 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-05 02:02:34.728866 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-05 02:02:34.728870 | orchestrator | 2026-02-05 02:02:34.728875 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-05 02:02:34.728880 | orchestrator | 2026-02-05 02:02:34.728884 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-05 02:02:34.728889 | orchestrator | Thursday 05 February 2026 02:02:31 +0000 (0:00:02.974) 
0:02:14.100 ***** 2026-02-05 02:02:34.728893 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:02:34.728898 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:02:34.728902 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:02:34.728906 | orchestrator | 2026-02-05 02:02:34.728918 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-05 02:02:34.728922 | orchestrator | Thursday 05 February 2026 02:02:32 +0000 (0:00:00.312) 0:02:14.413 ***** 2026-02-05 02:02:34.728925 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:02:34.728929 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:02:34.728933 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:02:34.728939 | orchestrator | 2026-02-05 02:02:34.728943 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-05 02:02:34.728947 | orchestrator | Thursday 05 February 2026 02:02:32 +0000 (0:00:00.619) 0:02:15.033 ***** 2026-02-05 02:02:34.728950 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:02:34.728954 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:02:34.728958 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:02:34.728962 | orchestrator | 2026-02-05 02:02:34.728965 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-05 02:02:34.728969 | orchestrator | Thursday 05 February 2026 02:02:33 +0000 (0:00:00.484) 0:02:15.518 ***** 2026-02-05 02:02:34.728973 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:02:34.728977 | orchestrator | 2026-02-05 02:02:34.728980 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-05 02:02:34.728984 | orchestrator | Thursday 05 February 2026 02:02:33 +0000 (0:00:00.456) 0:02:15.975 ***** 2026-02-05 02:02:34.728988 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:02:34.728992 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 02:02:34.728996 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:02:34.728999 | orchestrator | 2026-02-05 02:02:34.729003 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-05 02:02:34.729007 | orchestrator | Thursday 05 February 2026 02:02:34 +0000 (0:00:00.334) 0:02:16.309 ***** 2026-02-05 02:02:34.729011 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:02:34.729014 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:02:34.729018 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:02:34.729022 | orchestrator | 2026-02-05 02:02:34.729026 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-05 02:02:34.729030 | orchestrator | Thursday 05 February 2026 02:02:34 +0000 (0:00:00.452) 0:02:16.762 ***** 2026-02-05 02:02:34.729035 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:04:10.697349 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:04:10.697421 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:04:10.697432 | orchestrator | 2026-02-05 02:04:10.697440 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-05 02:04:10.697448 | orchestrator | Thursday 05 February 2026 02:02:34 +0000 (0:00:00.300) 0:02:17.062 ***** 2026-02-05 02:04:10.697454 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:04:10.697461 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:04:10.697468 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:04:10.697474 | orchestrator | 2026-02-05 02:04:10.697481 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-05 02:04:10.697563 | orchestrator | Thursday 05 February 2026 02:02:35 +0000 (0:00:00.613) 0:02:17.675 ***** 2026-02-05 02:04:10.697570 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:04:10.697576 | 
orchestrator | changed: [testbed-node-5] 2026-02-05 02:04:10.697582 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:04:10.697588 | orchestrator | 2026-02-05 02:04:10.697594 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-05 02:04:10.697601 | orchestrator | Thursday 05 February 2026 02:02:36 +0000 (0:00:01.072) 0:02:18.747 ***** 2026-02-05 02:04:10.697608 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:04:10.697614 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:04:10.697621 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:04:10.697627 | orchestrator | 2026-02-05 02:04:10.697634 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-05 02:04:10.697641 | orchestrator | Thursday 05 February 2026 02:02:38 +0000 (0:00:01.696) 0:02:20.444 ***** 2026-02-05 02:04:10.697648 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:04:10.697655 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:04:10.697660 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:04:10.697666 | orchestrator | 2026-02-05 02:04:10.697672 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-05 02:04:10.697696 | orchestrator | 2026-02-05 02:04:10.697703 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-05 02:04:10.697710 | orchestrator | Thursday 05 February 2026 02:02:47 +0000 (0:00:09.544) 0:02:29.988 ***** 2026-02-05 02:04:10.697716 | orchestrator | ok: [testbed-manager] 2026-02-05 02:04:10.697723 | orchestrator | 2026-02-05 02:04:10.697728 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-05 02:04:10.697734 | orchestrator | Thursday 05 February 2026 02:02:48 +0000 (0:00:00.777) 0:02:30.765 ***** 2026-02-05 02:04:10.697740 | orchestrator | changed: [testbed-manager] 2026-02-05 
02:04:10.697746 | orchestrator | 2026-02-05 02:04:10.697752 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-05 02:04:10.697759 | orchestrator | Thursday 05 February 2026 02:02:48 +0000 (0:00:00.421) 0:02:31.187 ***** 2026-02-05 02:04:10.697764 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-05 02:04:10.697771 | orchestrator | 2026-02-05 02:04:10.697778 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-05 02:04:10.697785 | orchestrator | Thursday 05 February 2026 02:02:49 +0000 (0:00:00.597) 0:02:31.784 ***** 2026-02-05 02:04:10.697792 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.697798 | orchestrator | 2026-02-05 02:04:10.697804 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-05 02:04:10.697810 | orchestrator | Thursday 05 February 2026 02:02:50 +0000 (0:00:01.067) 0:02:32.852 ***** 2026-02-05 02:04:10.697816 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.697822 | orchestrator | 2026-02-05 02:04:10.697828 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-05 02:04:10.697835 | orchestrator | Thursday 05 February 2026 02:02:51 +0000 (0:00:00.572) 0:02:33.425 ***** 2026-02-05 02:04:10.697840 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 02:04:10.697847 | orchestrator | 2026-02-05 02:04:10.697852 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-05 02:04:10.697858 | orchestrator | Thursday 05 February 2026 02:02:52 +0000 (0:00:01.497) 0:02:34.923 ***** 2026-02-05 02:04:10.697864 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 02:04:10.697870 | orchestrator | 2026-02-05 02:04:10.697890 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-05 02:04:10.697896 | orchestrator | Thursday 05 February 2026 02:02:53 +0000 (0:00:00.791) 0:02:35.714 ***** 2026-02-05 02:04:10.697902 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.697909 | orchestrator | 2026-02-05 02:04:10.697915 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-05 02:04:10.697922 | orchestrator | Thursday 05 February 2026 02:02:53 +0000 (0:00:00.422) 0:02:36.136 ***** 2026-02-05 02:04:10.697929 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.697936 | orchestrator | 2026-02-05 02:04:10.697943 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-05 02:04:10.697950 | orchestrator | 2026-02-05 02:04:10.697957 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-05 02:04:10.697965 | orchestrator | Thursday 05 February 2026 02:02:54 +0000 (0:00:00.440) 0:02:36.577 ***** 2026-02-05 02:04:10.697972 | orchestrator | ok: [testbed-manager] 2026-02-05 02:04:10.697978 | orchestrator | 2026-02-05 02:04:10.697985 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-05 02:04:10.697992 | orchestrator | Thursday 05 February 2026 02:02:54 +0000 (0:00:00.139) 0:02:36.717 ***** 2026-02-05 02:04:10.697998 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 02:04:10.698005 | orchestrator | 2026-02-05 02:04:10.698011 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-05 02:04:10.698056 | orchestrator | Thursday 05 February 2026 02:02:54 +0000 (0:00:00.212) 0:02:36.929 ***** 2026-02-05 02:04:10.698063 | orchestrator | ok: [testbed-manager] 2026-02-05 02:04:10.698070 | orchestrator | 2026-02-05 02:04:10.698085 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-05 02:04:10.698093 | orchestrator | Thursday 05 February 2026 02:02:55 +0000 (0:00:01.195) 0:02:38.125 ***** 2026-02-05 02:04:10.698098 | orchestrator | ok: [testbed-manager] 2026-02-05 02:04:10.698104 | orchestrator | 2026-02-05 02:04:10.698125 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-05 02:04:10.698132 | orchestrator | Thursday 05 February 2026 02:02:57 +0000 (0:00:01.477) 0:02:39.603 ***** 2026-02-05 02:04:10.698138 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.698145 | orchestrator | 2026-02-05 02:04:10.698151 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-05 02:04:10.698158 | orchestrator | Thursday 05 February 2026 02:02:58 +0000 (0:00:00.800) 0:02:40.404 ***** 2026-02-05 02:04:10.698164 | orchestrator | ok: [testbed-manager] 2026-02-05 02:04:10.698170 | orchestrator | 2026-02-05 02:04:10.698177 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-05 02:04:10.698184 | orchestrator | Thursday 05 February 2026 02:02:58 +0000 (0:00:00.467) 0:02:40.872 ***** 2026-02-05 02:04:10.698191 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.698198 | orchestrator | 2026-02-05 02:04:10.698205 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-05 02:04:10.698211 | orchestrator | Thursday 05 February 2026 02:03:05 +0000 (0:00:07.097) 0:02:47.969 ***** 2026-02-05 02:04:10.698219 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:10.698226 | orchestrator | 2026-02-05 02:04:10.698232 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-05 02:04:10.698238 | orchestrator | Thursday 05 February 2026 02:03:17 +0000 (0:00:12.030) 0:03:00.000 ***** 2026-02-05 02:04:10.698245 | orchestrator | ok: [testbed-manager] 2026-02-05 
02:04:10.698251 | orchestrator | 2026-02-05 02:04:10.698258 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-05 02:04:10.698264 | orchestrator | 2026-02-05 02:04:10.698270 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-05 02:04:10.698276 | orchestrator | Thursday 05 February 2026 02:03:18 +0000 (0:00:00.550) 0:03:00.551 ***** 2026-02-05 02:04:10.698282 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:04:10.698289 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:04:10.698295 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:04:10.698301 | orchestrator | 2026-02-05 02:04:10.698309 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-05 02:04:10.698315 | orchestrator | Thursday 05 February 2026 02:03:18 +0000 (0:00:00.569) 0:03:01.120 ***** 2026-02-05 02:04:10.698321 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:10.698328 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:04:10.698334 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:04:10.698341 | orchestrator | 2026-02-05 02:04:10.698348 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-05 02:04:10.698355 | orchestrator | Thursday 05 February 2026 02:03:19 +0000 (0:00:00.357) 0:03:01.478 ***** 2026-02-05 02:04:10.698361 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:04:10.698368 | orchestrator | 2026-02-05 02:04:10.698375 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-05 02:04:10.698381 | orchestrator | Thursday 05 February 2026 02:03:19 +0000 (0:00:00.515) 0:03:01.994 ***** 2026-02-05 02:04:10.698388 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 02:04:10.698394 | 
orchestrator | 2026-02-05 02:04:10.698400 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-05 02:04:10.698407 | orchestrator | Thursday 05 February 2026 02:03:20 +0000 (0:00:00.852) 0:03:02.846 ***** 2026-02-05 02:04:10.698413 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:04:10.698419 | orchestrator | 2026-02-05 02:04:10.698424 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-05 02:04:10.698436 | orchestrator | Thursday 05 February 2026 02:03:21 +0000 (0:00:00.837) 0:03:03.683 ***** 2026-02-05 02:04:10.698441 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:10.698448 | orchestrator | 2026-02-05 02:04:10.698454 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-05 02:04:10.698461 | orchestrator | Thursday 05 February 2026 02:03:21 +0000 (0:00:00.119) 0:03:03.803 ***** 2026-02-05 02:04:10.698468 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:04:10.698475 | orchestrator | 2026-02-05 02:04:10.698482 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-05 02:04:10.698497 | orchestrator | Thursday 05 February 2026 02:03:22 +0000 (0:00:01.340) 0:03:05.143 ***** 2026-02-05 02:04:10.698502 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:10.698508 | orchestrator | 2026-02-05 02:04:10.698514 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-05 02:04:10.698520 | orchestrator | Thursday 05 February 2026 02:03:23 +0000 (0:00:00.124) 0:03:05.268 ***** 2026-02-05 02:04:10.698528 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:10.698534 | orchestrator | 2026-02-05 02:04:10.698541 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-05 02:04:10.698547 | orchestrator | Thursday 05 
February 2026 02:03:23 +0000 (0:00:00.119) 0:03:05.387 ***** 2026-02-05 02:04:10.698554 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:10.698561 | orchestrator | 2026-02-05 02:04:10.698567 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-05 02:04:10.698578 | orchestrator | Thursday 05 February 2026 02:03:23 +0000 (0:00:00.123) 0:03:05.511 ***** 2026-02-05 02:04:10.698585 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:10.698591 | orchestrator | 2026-02-05 02:04:10.698597 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-05 02:04:10.698603 | orchestrator | Thursday 05 February 2026 02:03:23 +0000 (0:00:00.115) 0:03:05.626 ***** 2026-02-05 02:04:10.698609 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 02:04:10.698615 | orchestrator | 2026-02-05 02:04:10.698622 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-05 02:04:10.698628 | orchestrator | Thursday 05 February 2026 02:03:28 +0000 (0:00:05.050) 0:03:10.676 ***** 2026-02-05 02:04:10.698634 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-05 02:04:10.698640 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-05 02:04:10.698655 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-05 02:04:32.693008 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-05 02:04:32.693114 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-05 02:04:32.693131 | orchestrator | 2026-02-05 02:04:32.693143 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-05 02:04:32.693150 | orchestrator | Thursday 05 February 2026 02:04:10 +0000 (0:00:42.220) 0:03:52.897 ***** 2026-02-05 02:04:32.693156 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:04:32.693163 | orchestrator | 2026-02-05 02:04:32.693169 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-05 02:04:32.693177 | orchestrator | Thursday 05 February 2026 02:04:11 +0000 (0:00:01.192) 0:03:54.090 ***** 2026-02-05 02:04:32.693183 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 02:04:32.693190 | orchestrator | 2026-02-05 02:04:32.693196 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-05 02:04:32.693203 | orchestrator | Thursday 05 February 2026 02:04:13 +0000 (0:00:01.515) 0:03:55.605 ***** 2026-02-05 02:04:32.693210 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 02:04:32.693216 | orchestrator | 2026-02-05 02:04:32.693223 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-05 02:04:32.693230 | orchestrator | Thursday 05 February 2026 02:04:14 +0000 (0:00:01.041) 0:03:56.647 ***** 2026-02-05 02:04:32.693258 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:32.693267 | orchestrator | 2026-02-05 02:04:32.693273 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-05 02:04:32.693279 | orchestrator 
| Thursday 05 February 2026 02:04:14 +0000 (0:00:00.131) 0:03:56.779 ***** 2026-02-05 02:04:32.693286 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-05 02:04:32.693294 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-05 02:04:32.693301 | orchestrator | 2026-02-05 02:04:32.693307 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-05 02:04:32.693314 | orchestrator | Thursday 05 February 2026 02:04:16 +0000 (0:00:01.781) 0:03:58.560 ***** 2026-02-05 02:04:32.693321 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:32.693328 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:04:32.693334 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:04:32.693340 | orchestrator | 2026-02-05 02:04:32.693346 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-05 02:04:32.693352 | orchestrator | Thursday 05 February 2026 02:04:16 +0000 (0:00:00.521) 0:03:59.081 ***** 2026-02-05 02:04:32.693359 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:04:32.693366 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:04:32.693372 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:04:32.693378 | orchestrator | 2026-02-05 02:04:32.693385 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-05 02:04:32.693391 | orchestrator | 2026-02-05 02:04:32.693398 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-05 02:04:32.693404 | orchestrator | Thursday 05 February 2026 02:04:17 +0000 (0:00:00.847) 0:03:59.929 ***** 2026-02-05 02:04:32.693410 | orchestrator | ok: [testbed-manager] 2026-02-05 02:04:32.693417 | orchestrator | 2026-02-05 02:04:32.693423 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-05 02:04:32.693430 | orchestrator | Thursday 05 February 2026 02:04:17 +0000 (0:00:00.141) 0:04:00.070 ***** 2026-02-05 02:04:32.693437 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 02:04:32.693444 | orchestrator | 2026-02-05 02:04:32.693450 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-05 02:04:32.693456 | orchestrator | Thursday 05 February 2026 02:04:18 +0000 (0:00:00.252) 0:04:00.323 ***** 2026-02-05 02:04:32.693463 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:32.693469 | orchestrator | 2026-02-05 02:04:32.693475 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-05 02:04:32.693523 | orchestrator | 2026-02-05 02:04:32.693532 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-05 02:04:32.693537 | orchestrator | Thursday 05 February 2026 02:04:23 +0000 (0:00:05.378) 0:04:05.701 ***** 2026-02-05 02:04:32.693543 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:04:32.693549 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:04:32.693556 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:04:32.693562 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:04:32.693568 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:04:32.693575 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:04:32.693581 | orchestrator | 2026-02-05 02:04:32.693587 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-05 02:04:32.693593 | orchestrator | Thursday 05 February 2026 02:04:24 +0000 (0:00:00.573) 0:04:06.275 ***** 2026-02-05 02:04:32.693600 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-05 02:04:32.693606 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-05 02:04:32.693612 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-05 02:04:32.693619 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 02:04:32.693637 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 02:04:32.693643 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 02:04:32.693649 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 02:04:32.693656 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 02:04:32.693662 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 02:04:32.693688 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 02:04:32.693695 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 02:04:32.693703 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 02:04:32.693709 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 02:04:32.693715 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 02:04:32.693722 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 02:04:32.693746 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 02:04:32.693753 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 02:04:32.693759 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-05 02:04:32.693765 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 02:04:32.693772 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 02:04:32.693778 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 02:04:32.693784 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 02:04:32.693791 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 02:04:32.693798 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 02:04:32.693804 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 02:04:32.693811 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 02:04:32.693817 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 02:04:32.693823 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 02:04:32.693830 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 02:04:32.693836 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 02:04:32.693842 | orchestrator | 2026-02-05 02:04:32.693849 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-05 02:04:32.693855 | orchestrator | Thursday 05 February 2026 02:04:31 +0000 (0:00:07.665) 0:04:13.941 ***** 2026-02-05 02:04:32.693862 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:04:32.693868 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:04:32.693874 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 02:04:32.693881 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:32.693887 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:04:32.693893 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:04:32.693899 | orchestrator | 2026-02-05 02:04:32.693905 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-05 02:04:32.693912 | orchestrator | Thursday 05 February 2026 02:04:32 +0000 (0:00:00.510) 0:04:14.451 ***** 2026-02-05 02:04:32.693918 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:04:32.693931 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:04:32.693938 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:04:32.693944 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:04:32.693950 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:04:32.693957 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:04:32.693964 | orchestrator | 2026-02-05 02:04:32.693971 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:04:32.693978 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:04:32.693988 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-05 02:04:32.693996 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 02:04:32.694003 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 02:04:32.694010 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 02:04:32.694096 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 02:04:32.694103 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 02:04:32.694110 | orchestrator | 2026-02-05 02:04:32.694115 | orchestrator | 2026-02-05 02:04:32.694121 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:04:32.694128 | orchestrator | Thursday 05 February 2026 02:04:32 +0000 (0:00:00.431) 0:04:14.883 ***** 2026-02-05 02:04:32.694145 | orchestrator | =============================================================================== 2026-02-05 02:04:33.092347 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.26s 2026-02-05 02:04:33.092442 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.22s 2026-02-05 02:04:33.092455 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.60s 2026-02-05 02:04:33.092463 | orchestrator | kubectl : Install required packages ------------------------------------ 12.03s 2026-02-05 02:04:33.092471 | orchestrator | k3s_download : Download k3s binary x64 --------------------------------- 10.85s 2026-02-05 02:04:33.092478 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.54s 2026-02-05 02:04:33.092531 | orchestrator | Manage labels ----------------------------------------------------------- 7.67s 2026-02-05 02:04:33.092538 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.10s 2026-02-05 02:04:33.092544 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.38s 2026-02-05 02:04:33.092551 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.05s 2026-02-05 02:04:33.092559 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s 2026-02-05 02:04:33.092567 | orchestrator 
| k3s_server : Detect Kubernetes version for label compatibility ---------- 2.44s 2026-02-05 02:04:33.092574 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.24s 2026-02-05 02:04:33.092581 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.78s 2026-02-05 02:04:33.092588 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.70s 2026-02-05 02:04:33.092594 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.60s 2026-02-05 02:04:33.092601 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.56s 2026-02-05 02:04:33.092632 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.54s 2026-02-05 02:04:33.092639 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.52s 2026-02-05 02:04:33.092646 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.50s 2026-02-05 02:04:33.391885 | orchestrator | + osism apply copy-kubeconfig 2026-02-05 02:04:45.404882 | orchestrator | 2026-02-05 02:04:45 | INFO  | Task 755b7dc0-d5ce-4ba3-80e8-b5e1da953dc3 (copy-kubeconfig) was prepared for execution. 2026-02-05 02:04:45.404980 | orchestrator | 2026-02-05 02:04:45 | INFO  | It takes a moment until task 755b7dc0-d5ce-4ba3-80e8-b5e1da953dc3 (copy-kubeconfig) has been started and output is visible here. 
2026-02-05 02:04:52.320461 | orchestrator | 2026-02-05 02:04:52.320542 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-05 02:04:52.320549 | orchestrator | 2026-02-05 02:04:52.320554 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-05 02:04:52.320558 | orchestrator | Thursday 05 February 2026 02:04:49 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-02-05 02:04:52.320562 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-05 02:04:52.320566 | orchestrator | 2026-02-05 02:04:52.320570 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-05 02:04:52.320574 | orchestrator | Thursday 05 February 2026 02:04:50 +0000 (0:00:00.773) 0:00:00.928 ***** 2026-02-05 02:04:52.320588 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:52.320593 | orchestrator | 2026-02-05 02:04:52.320597 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-05 02:04:52.320601 | orchestrator | Thursday 05 February 2026 02:04:51 +0000 (0:00:01.191) 0:00:02.119 ***** 2026-02-05 02:04:52.320610 | orchestrator | changed: [testbed-manager] 2026-02-05 02:04:52.320615 | orchestrator | 2026-02-05 02:04:52.320621 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:04:52.320625 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:04:52.320629 | orchestrator | 2026-02-05 02:04:52.320633 | orchestrator | 2026-02-05 02:04:52.320637 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:04:52.320641 | orchestrator | Thursday 05 February 2026 02:04:52 +0000 (0:00:00.477) 0:00:02.597 ***** 2026-02-05 02:04:52.320645 | orchestrator | 
=============================================================================== 2026-02-05 02:04:52.320649 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s 2026-02-05 02:04:52.320653 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s 2026-02-05 02:04:52.320657 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s 2026-02-05 02:04:52.623154 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-02-05 02:05:04.729525 | orchestrator | 2026-02-05 02:05:04 | INFO  | Task 4f23e7ec-48db-4c51-a89d-0f405eae105e (openstackclient) was prepared for execution. 2026-02-05 02:05:04.729625 | orchestrator | 2026-02-05 02:05:04 | INFO  | It takes a moment until task 4f23e7ec-48db-4c51-a89d-0f405eae105e (openstackclient) has been started and output is visible here. 2026-02-05 02:05:51.509155 | orchestrator | 2026-02-05 02:05:51.509228 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-05 02:05:51.509238 | orchestrator | 2026-02-05 02:05:51.509244 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-05 02:05:51.509251 | orchestrator | Thursday 05 February 2026 02:05:08 +0000 (0:00:00.226) 0:00:00.226 ***** 2026-02-05 02:05:51.509258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-05 02:05:51.509264 | orchestrator | 2026-02-05 02:05:51.509285 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-05 02:05:51.509291 | orchestrator | Thursday 05 February 2026 02:05:09 +0000 (0:00:00.221) 0:00:00.448 ***** 2026-02-05 02:05:51.509297 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-05 
02:05:51.509304 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-05 02:05:51.509310 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-05 02:05:51.509315 | orchestrator | 2026-02-05 02:05:51.509321 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-05 02:05:51.509327 | orchestrator | Thursday 05 February 2026 02:05:10 +0000 (0:00:01.247) 0:00:01.696 ***** 2026-02-05 02:05:51.509333 | orchestrator | changed: [testbed-manager] 2026-02-05 02:05:51.509339 | orchestrator | 2026-02-05 02:05:51.509345 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-05 02:05:51.509351 | orchestrator | Thursday 05 February 2026 02:05:11 +0000 (0:00:01.390) 0:00:03.086 ***** 2026-02-05 02:05:51.509356 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-05 02:05:51.509362 | orchestrator | ok: [testbed-manager] 2026-02-05 02:05:51.509369 | orchestrator | 2026-02-05 02:05:51.509374 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-05 02:05:51.509380 | orchestrator | Thursday 05 February 2026 02:05:46 +0000 (0:00:34.711) 0:00:37.798 ***** 2026-02-05 02:05:51.509385 | orchestrator | changed: [testbed-manager] 2026-02-05 02:05:51.509391 | orchestrator | 2026-02-05 02:05:51.509401 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-05 02:05:51.509408 | orchestrator | Thursday 05 February 2026 02:05:47 +0000 (0:00:00.866) 0:00:38.664 ***** 2026-02-05 02:05:51.509414 | orchestrator | ok: [testbed-manager] 2026-02-05 02:05:51.509420 | orchestrator | 2026-02-05 02:05:51.509426 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-05 02:05:51.509432 | orchestrator | Thursday 05 February 2026 02:05:48 
+0000 (0:00:00.629) 0:00:39.294 ***** 2026-02-05 02:05:51.509438 | orchestrator | changed: [testbed-manager] 2026-02-05 02:05:51.509456 | orchestrator | 2026-02-05 02:05:51.509462 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-05 02:05:51.509468 | orchestrator | Thursday 05 February 2026 02:05:49 +0000 (0:00:01.492) 0:00:40.786 ***** 2026-02-05 02:05:51.509475 | orchestrator | changed: [testbed-manager] 2026-02-05 02:05:51.509481 | orchestrator | 2026-02-05 02:05:51.509487 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-05 02:05:51.509493 | orchestrator | Thursday 05 February 2026 02:05:50 +0000 (0:00:00.677) 0:00:41.464 ***** 2026-02-05 02:05:51.509499 | orchestrator | changed: [testbed-manager] 2026-02-05 02:05:51.509505 | orchestrator | 2026-02-05 02:05:51.509511 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-05 02:05:51.509516 | orchestrator | Thursday 05 February 2026 02:05:50 +0000 (0:00:00.551) 0:00:42.016 ***** 2026-02-05 02:05:51.509522 | orchestrator | ok: [testbed-manager] 2026-02-05 02:05:51.509528 | orchestrator | 2026-02-05 02:05:51.509534 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:05:51.509540 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:05:51.509547 | orchestrator | 2026-02-05 02:05:51.509553 | orchestrator | 2026-02-05 02:05:51.509559 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:05:51.509565 | orchestrator | Thursday 05 February 2026 02:05:51 +0000 (0:00:00.383) 0:00:42.399 ***** 2026-02-05 02:05:51.509571 | orchestrator | =============================================================================== 2026-02-05 02:05:51.509577 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 34.71s 2026-02-05 02:05:51.509583 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.49s 2026-02-05 02:05:51.509598 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.39s 2026-02-05 02:05:51.509604 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.25s 2026-02-05 02:05:51.509610 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.87s 2026-02-05 02:05:51.509616 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.68s 2026-02-05 02:05:51.509622 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.63s 2026-02-05 02:05:51.509628 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.55s 2026-02-05 02:05:51.509634 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s 2026-02-05 02:05:51.509640 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.22s 2026-02-05 02:05:53.721562 | orchestrator | 2026-02-05 02:05:53 | INFO  | Task 84c4f41f-cc3e-4df2-a41d-58a3c5a89e3b (common) was prepared for execution. 2026-02-05 02:05:53.721641 | orchestrator | 2026-02-05 02:05:53 | INFO  | It takes a moment until task 84c4f41f-cc3e-4df2-a41d-58a3c5a89e3b (common) has been started and output is visible here. 
2026-02-05 02:06:04.455082 | orchestrator |
2026-02-05 02:06:04.455144 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-05 02:06:04.455153 | orchestrator |
2026-02-05 02:06:04.455160 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-05 02:06:04.455168 | orchestrator | Thursday 05 February 2026 02:05:57 +0000 (0:00:00.203) 0:00:00.203 *****
2026-02-05 02:06:04.455175 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:06:04.455183 | orchestrator |
2026-02-05 02:06:04.455190 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-05 02:06:04.455197 | orchestrator | Thursday 05 February 2026 02:05:58 +0000 (0:00:00.936) 0:00:01.140 *****
2026-02-05 02:06:04.455203 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455210 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455217 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455224 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455231 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455237 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455244 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455250 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455257 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-05 02:06:04.455274 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455281 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455288 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455295 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455301 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455307 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455314 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-05 02:06:04.455320 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455340 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455347 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455353 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455359 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-05 02:06:04.455364 | orchestrator |
2026-02-05 02:06:04.455370 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-05 02:06:04.455376 | orchestrator | Thursday 05 February 2026 02:06:00 +0000 (0:00:02.349) 0:00:03.489 *****
2026-02-05 02:06:04.455382 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:06:04.455389 | orchestrator |
2026-02-05 02:06:04.455395 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-05 02:06:04.455404 | orchestrator | Thursday 05 February 2026 02:06:01 +0000 (0:00:01.167) 0:00:04.656 *****
2026-02-05 02:06:04.455412 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:04.455550 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:04.455557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:04.455574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518409 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518480 | orchestrator |
2026-02-05 02:06:05.518486 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-05 02:06:05.518491 | orchestrator | Thursday 05 February 2026 02:06:05 +0000 (0:00:03.211) 0:00:07.867 *****
2026-02-05 02:06:05.518497 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:05.518502 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:05.518512 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:06:05.518518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:05.518528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071537 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:06:06.071567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.071576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071590 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:06:06.071596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.071606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071618 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:06:06.071636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.071648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071662 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:06:06.071668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.071675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.071689 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:06:06.071696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.071704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.872204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873121 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:06:06.873164 | orchestrator |
2026-02-05 02:06:06.873172 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-05 02:06:06.873178 | orchestrator | Thursday 05 February 2026 02:06:06 +0000 (0:00:00.852) 0:00:08.720 *****
2026-02-05 02:06:06.873184 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.873191 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873197 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.873227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873248 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:06:06.873252 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:06:06.873276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.873281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873289 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:06:06.873293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.873297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:06.873324 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:06:06.873339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:06.873358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:11.950354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:11.950484 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:06:11.950502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 02:06:11.950513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:11.950520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:06:11.950527 |
orchestrator | skipping: [testbed-node-4] 2026-02-05 02:06:11.950534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 02:06:11.950563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:11.950569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:11.950575 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:06:11.950582 | orchestrator | 2026-02-05 02:06:11.950589 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-05 
02:06:11.950598 | orchestrator | Thursday 05 February 2026 02:06:07 +0000 (0:00:01.707) 0:00:10.427 ***** 2026-02-05 02:06:11.950604 | orchestrator | skipping: [testbed-manager] 2026-02-05 02:06:11.950611 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:06:11.950617 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:06:11.950621 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:06:11.950640 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:06:11.950647 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:06:11.950653 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:06:11.950660 | orchestrator | 2026-02-05 02:06:11.950668 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-05 02:06:11.950673 | orchestrator | Thursday 05 February 2026 02:06:08 +0000 (0:00:00.711) 0:00:11.138 ***** 2026-02-05 02:06:11.950679 | orchestrator | skipping: [testbed-manager] 2026-02-05 02:06:11.950685 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:06:11.950691 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:06:11.950698 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:06:11.950704 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:06:11.950711 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:06:11.950717 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:06:11.950724 | orchestrator | 2026-02-05 02:06:11.950728 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-05 02:06:11.950734 | orchestrator | Thursday 05 February 2026 02:06:09 +0000 (0:00:00.813) 0:00:11.952 ***** 2026-02-05 02:06:11.950741 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:11.950765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:11.950781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:11.950791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:11.950797 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:11.950803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:11.950820 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:14.771542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771594 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:14.771648 | orchestrator | 2026-02-05 02:06:14.771654 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-05 02:06:14.771659 | orchestrator | Thursday 05 February 2026 02:06:13 +0000 (0:00:03.755) 0:00:15.708 ***** 2026-02-05 02:06:14.771663 | orchestrator | [WARNING]: Skipped 2026-02-05 02:06:14.771668 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-05 02:06:14.771674 | orchestrator | to this access issue: 2026-02-05 02:06:14.771678 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-05 02:06:14.771682 | orchestrator | directory 2026-02-05 02:06:14.771686 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 02:06:14.771691 | orchestrator | 2026-02-05 02:06:14.771694 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-05 02:06:14.771698 | orchestrator | Thursday 05 February 2026 02:06:14 +0000 (0:00:01.002) 0:00:16.710 ***** 2026-02-05 02:06:14.771702 | orchestrator | [WARNING]: Skipped 2026-02-05 02:06:14.771706 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-05 02:06:14.771710 | orchestrator | to this access issue: 2026-02-05 02:06:14.771716 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-05 02:06:24.171178 | orchestrator | directory 2026-02-05 02:06:24.171254 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 02:06:24.171262 | orchestrator | 2026-02-05 02:06:24.171267 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-05 02:06:24.171273 | orchestrator | Thursday 05 February 2026 02:06:15 +0000 (0:00:00.964) 0:00:17.674 ***** 2026-02-05 02:06:24.171293 | orchestrator | [WARNING]: Skipped 2026-02-05 02:06:24.171298 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-05 02:06:24.171304 | orchestrator | to this access issue: 2026-02-05 02:06:24.171308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-05 02:06:24.171312 | orchestrator | directory 2026-02-05 02:06:24.171316 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 
02:06:24.171320 | orchestrator | 2026-02-05 02:06:24.171324 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-05 02:06:24.171328 | orchestrator | Thursday 05 February 2026 02:06:15 +0000 (0:00:00.834) 0:00:18.509 ***** 2026-02-05 02:06:24.171331 | orchestrator | [WARNING]: Skipped 2026-02-05 02:06:24.171335 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-05 02:06:24.171339 | orchestrator | to this access issue: 2026-02-05 02:06:24.171343 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-05 02:06:24.171347 | orchestrator | directory 2026-02-05 02:06:24.171351 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 02:06:24.171354 | orchestrator | 2026-02-05 02:06:24.171358 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-05 02:06:24.171362 | orchestrator | Thursday 05 February 2026 02:06:16 +0000 (0:00:00.841) 0:00:19.350 ***** 2026-02-05 02:06:24.171366 | orchestrator | changed: [testbed-manager] 2026-02-05 02:06:24.171370 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:06:24.171374 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:06:24.171377 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:06:24.171381 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:06:24.171385 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:06:24.171402 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:06:24.171406 | orchestrator | 2026-02-05 02:06:24.171410 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-05 02:06:24.171414 | orchestrator | Thursday 05 February 2026 02:06:19 +0000 (0:00:02.441) 0:00:21.791 ***** 2026-02-05 02:06:24.171418 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 
02:06:24.171423 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 02:06:24.171427 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 02:06:24.171458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 02:06:24.171464 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 02:06:24.171470 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 02:06:24.171479 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 02:06:24.171484 | orchestrator | 2026-02-05 02:06:24.171488 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-05 02:06:24.171492 | orchestrator | Thursday 05 February 2026 02:06:21 +0000 (0:00:01.916) 0:00:23.708 ***** 2026-02-05 02:06:24.171496 | orchestrator | changed: [testbed-manager] 2026-02-05 02:06:24.171500 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:06:24.171503 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:06:24.171509 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:06:24.171515 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:06:24.171521 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:06:24.171527 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:06:24.171533 | orchestrator | 2026-02-05 02:06:24.171539 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-05 02:06:24.171550 | orchestrator | Thursday 05 February 2026 02:06:22 +0000 (0:00:01.901) 0:00:25.609 ***** 2026-02-05 02:06:24.171559 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:24.171579 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:24.171586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:24.171593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:24.171599 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:24.171609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:24.171617 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:24.171632 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:24.171639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:24.171651 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:30.428728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:30.428820 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:30.428831 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:30.428848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:30.428869 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:30.428874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:06:30.428878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:30.428905 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:30.428912 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:30.428918 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:30.428924 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:30.428930 | orchestrator | 2026-02-05 02:06:30.428938 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-05 02:06:30.428947 | orchestrator | Thursday 05 February 
2026 02:06:24 +0000 (0:00:01.732) 0:00:27.342 ***** 2026-02-05 02:06:30.428970 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.428982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.428995 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.429001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.429008 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.429014 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.429020 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 02:06:30.429026 | orchestrator | 2026-02-05 02:06:30.429033 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-05 02:06:30.429039 | orchestrator | Thursday 05 February 2026 02:06:26 +0000 (0:00:02.012) 0:00:29.355 ***** 2026-02-05 02:06:30.429046 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429053 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429057 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429066 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429070 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429074 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429078 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 02:06:30.429082 | orchestrator | 2026-02-05 02:06:30.429085 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-05 02:06:30.429089 | orchestrator | Thursday 05 February 2026 02:06:28 +0000 (0:00:01.645) 0:00:31.000 ***** 2026-02-05 02:06:30.429093 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:30.429105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:31.187859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:31.187943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:31.187991 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.187999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-02-05 02:06:31.188005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:31.188012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 02:06:31.188019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188047 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:06:31.188103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:07:38.090082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:07:38.090211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:07:38.090222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:07:38.090244 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:07:38.090253 | orchestrator | 2026-02-05 02:07:38.090263 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-05 02:07:38.090272 | orchestrator | Thursday 05 February 2026 02:06:31 +0000 (0:00:02.838) 0:00:33.838 ***** 2026-02-05 02:07:38.090280 | orchestrator | changed: [testbed-manager] 2026-02-05 02:07:38.090290 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:07:38.090297 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:07:38.090305 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:07:38.090312 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:07:38.090320 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:07:38.090327 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:07:38.090335 | orchestrator | 2026-02-05 02:07:38.090343 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-05 02:07:38.090352 | orchestrator | Thursday 05 February 2026 02:06:32 +0000 (0:00:01.451) 0:00:35.289 ***** 2026-02-05 02:07:38.090359 | orchestrator | changed: [testbed-manager] 2026-02-05 02:07:38.090367 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:07:38.090376 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:07:38.090384 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:07:38.090391 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:07:38.090477 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:07:38.090485 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:07:38.090492 | orchestrator | 
2026-02-05 02:07:38.090500 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090507 | orchestrator | Thursday 05 February 2026 02:06:33 +0000 (0:00:01.089) 0:00:36.379 ***** 2026-02-05 02:07:38.090513 | orchestrator | 2026-02-05 02:07:38.090520 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090528 | orchestrator | Thursday 05 February 2026 02:06:33 +0000 (0:00:00.065) 0:00:36.445 ***** 2026-02-05 02:07:38.090535 | orchestrator | 2026-02-05 02:07:38.090542 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090549 | orchestrator | Thursday 05 February 2026 02:06:33 +0000 (0:00:00.063) 0:00:36.509 ***** 2026-02-05 02:07:38.090557 | orchestrator | 2026-02-05 02:07:38.090564 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090572 | orchestrator | Thursday 05 February 2026 02:06:33 +0000 (0:00:00.069) 0:00:36.578 ***** 2026-02-05 02:07:38.090580 | orchestrator | 2026-02-05 02:07:38.090588 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090603 | orchestrator | Thursday 05 February 2026 02:06:33 +0000 (0:00:00.062) 0:00:36.640 ***** 2026-02-05 02:07:38.090611 | orchestrator | 2026-02-05 02:07:38.090619 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090627 | orchestrator | Thursday 05 February 2026 02:06:34 +0000 (0:00:00.214) 0:00:36.855 ***** 2026-02-05 02:07:38.090635 | orchestrator | 2026-02-05 02:07:38.090645 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 02:07:38.090652 | orchestrator | Thursday 05 February 2026 02:06:34 +0000 (0:00:00.058) 0:00:36.913 ***** 2026-02-05 02:07:38.090659 | orchestrator 
| 2026-02-05 02:07:38.090666 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-05 02:07:38.090673 | orchestrator | Thursday 05 February 2026 02:06:34 +0000 (0:00:00.088) 0:00:37.002 ***** 2026-02-05 02:07:38.090680 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:07:38.090688 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:07:38.090695 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:07:38.090702 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:07:38.090709 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:07:38.090733 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:07:38.090741 | orchestrator | changed: [testbed-manager] 2026-02-05 02:07:38.090749 | orchestrator | 2026-02-05 02:07:38.090756 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-05 02:07:38.090763 | orchestrator | Thursday 05 February 2026 02:07:01 +0000 (0:00:27.617) 0:01:04.620 ***** 2026-02-05 02:07:38.090770 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:07:38.090776 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:07:38.090783 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:07:38.090789 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:07:38.090795 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:07:38.090801 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:07:38.090807 | orchestrator | changed: [testbed-manager] 2026-02-05 02:07:38.090813 | orchestrator | 2026-02-05 02:07:38.090820 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-05 02:07:38.090826 | orchestrator | Thursday 05 February 2026 02:07:26 +0000 (0:00:24.915) 0:01:29.535 ***** 2026-02-05 02:07:38.090833 | orchestrator | ok: [testbed-manager] 2026-02-05 02:07:38.090841 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:07:38.090847 | orchestrator | ok: [testbed-node-1] 
2026-02-05 02:07:38.090854 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:07:38.090860 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:07:38.090866 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:07:38.090872 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:07:38.090879 | orchestrator | 2026-02-05 02:07:38.090885 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-05 02:07:38.090892 | orchestrator | Thursday 05 February 2026 02:07:28 +0000 (0:00:02.074) 0:01:31.610 ***** 2026-02-05 02:07:38.090899 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:07:38.090905 | orchestrator | changed: [testbed-manager] 2026-02-05 02:07:38.090912 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:07:38.090919 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:07:38.090927 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:07:38.090934 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:07:38.090940 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:07:38.090946 | orchestrator | 2026-02-05 02:07:38.090953 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:07:38.090961 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 02:07:38.090969 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 02:07:38.090985 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 02:07:38.090999 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 02:07:38.091007 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 02:07:38.091013 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  
rescued=0 ignored=0 2026-02-05 02:07:38.091020 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 02:07:38.091027 | orchestrator | 2026-02-05 02:07:38.091034 | orchestrator | 2026-02-05 02:07:38.091041 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:07:38.091049 | orchestrator | Thursday 05 February 2026 02:07:38 +0000 (0:00:09.119) 0:01:40.729 ***** 2026-02-05 02:07:38.091055 | orchestrator | =============================================================================== 2026-02-05 02:07:38.091063 | orchestrator | common : Restart fluentd container ------------------------------------- 27.62s 2026-02-05 02:07:38.091069 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 24.92s 2026-02-05 02:07:38.091076 | orchestrator | common : Restart cron container ----------------------------------------- 9.12s 2026-02-05 02:07:38.091083 | orchestrator | common : Copying over config.json files for services -------------------- 3.76s 2026-02-05 02:07:38.091090 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.21s 2026-02-05 02:07:38.091098 | orchestrator | common : Check common containers ---------------------------------------- 2.84s 2026-02-05 02:07:38.091105 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.44s 2026-02-05 02:07:38.091112 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.35s 2026-02-05 02:07:38.091119 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.07s 2026-02-05 02:07:38.091126 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.01s 2026-02-05 02:07:38.091134 | orchestrator | common : Copying over cron logrotate config file ------------------------ 1.92s 2026-02-05 02:07:38.091141 
| orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.90s 2026-02-05 02:07:38.091149 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.73s 2026-02-05 02:07:38.091156 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.71s 2026-02-05 02:07:38.091163 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.65s 2026-02-05 02:07:38.091170 | orchestrator | common : Creating log volume -------------------------------------------- 1.45s 2026-02-05 02:07:38.091184 | orchestrator | common : include_tasks -------------------------------------------------- 1.17s 2026-02-05 02:07:38.541991 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.09s 2026-02-05 02:07:38.542101 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.00s 2026-02-05 02:07:38.542112 | orchestrator | common : Find custom fluentd filter config files ------------------------ 0.96s 2026-02-05 02:07:41.110128 | orchestrator | 2026-02-05 02:07:41 | INFO  | Task 018638ca-2bee-4126-b0b1-9b92b80794d2 (loadbalancer) was prepared for execution. 2026-02-05 02:07:41.110216 | orchestrator | 2026-02-05 02:07:41 | INFO  | It takes a moment until task 018638ca-2bee-4126-b0b1-9b92b80794d2 (loadbalancer) has been started and output is visible here. 
2026-02-05 02:07:56.441486 | orchestrator |
2026-02-05 02:07:56.441583 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 02:07:56.441597 | orchestrator |
2026-02-05 02:07:56.441604 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 02:07:56.441612 | orchestrator | Thursday 05 February 2026 02:07:45 +0000 (0:00:00.252) 0:00:00.252 *****
2026-02-05 02:07:56.441642 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:07:56.441650 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:07:56.441655 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:07:56.441661 | orchestrator |
2026-02-05 02:07:56.441667 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 02:07:56.441672 | orchestrator | Thursday 05 February 2026 02:07:45 +0000 (0:00:00.289) 0:00:00.542 *****
2026-02-05 02:07:56.441678 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-05 02:07:56.441684 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-05 02:07:56.441690 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-05 02:07:56.441696 | orchestrator |
2026-02-05 02:07:56.441701 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-05 02:07:56.441707 | orchestrator |
2026-02-05 02:07:56.441713 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-05 02:07:56.441732 | orchestrator | Thursday 05 February 2026 02:07:46 +0000 (0:00:00.441) 0:00:00.983 *****
2026-02-05 02:07:56.441739 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:07:56.441745 | orchestrator |
2026-02-05 02:07:56.441750 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-05 02:07:56.441756 | orchestrator | Thursday 05 February 2026 02:07:46 +0000 (0:00:00.506) 0:00:01.490 *****
2026-02-05 02:07:56.441762 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:07:56.441767 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:07:56.441773 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:07:56.441779 | orchestrator |
2026-02-05 02:07:56.441784 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-05 02:07:56.441789 | orchestrator | Thursday 05 February 2026 02:07:47 +0000 (0:00:00.689) 0:00:02.179 *****
2026-02-05 02:07:56.441795 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:07:56.441800 | orchestrator |
2026-02-05 02:07:56.441806 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-05 02:07:56.441812 | orchestrator | Thursday 05 February 2026 02:07:47 +0000 (0:00:00.699) 0:00:02.879 *****
2026-02-05 02:07:56.441817 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:07:56.441822 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:07:56.441828 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:07:56.441834 | orchestrator |
2026-02-05 02:07:56.441839 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-05 02:07:56.441845 | orchestrator | Thursday 05 February 2026 02:07:48 +0000 (0:00:00.682) 0:00:03.561 *****
2026-02-05 02:07:56.441851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 02:07:56.441857 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 02:07:56.441863 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 02:07:56.441869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 02:07:56.441874 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 02:07:56.441880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 02:07:56.441885 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 02:07:56.441893 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 02:07:56.441898 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 02:07:56.441904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 02:07:56.441919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 02:07:56.441924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 02:07:56.441930 | orchestrator |
2026-02-05 02:07:56.441935 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-05 02:07:56.441942 | orchestrator | Thursday 05 February 2026 02:07:51 +0000 (0:00:03.023) 0:00:06.585 *****
2026-02-05 02:07:56.441948 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-05 02:07:56.441955 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-05 02:07:56.441962 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-05 02:07:56.441968 | orchestrator |
2026-02-05 02:07:56.441974 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-05 02:07:56.441980 | orchestrator | Thursday 05 February 2026 02:07:52 +0000 (0:00:00.904) 0:00:07.489 *****
2026-02-05 02:07:56.441986 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-05 02:07:56.441993 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-05 02:07:56.442000 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-05 02:07:56.442004 | orchestrator |
2026-02-05 02:07:56.442008 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-05 02:07:56.442053 | orchestrator | Thursday 05 February 2026 02:07:53 +0000 (0:00:01.389) 0:00:08.878 *****
2026-02-05 02:07:56.442058 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-05 02:07:56.442063 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:07:56.442083 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-05 02:07:56.442088 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:07:56.442092 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-05 02:07:56.442097 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:07:56.442101 | orchestrator |
2026-02-05 02:07:56.442106 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-05 02:07:56.442110 | orchestrator | Thursday 05 February 2026 02:07:54 +0000 (0:00:00.481) 0:00:09.360 *****
2026-02-05 02:07:56.442122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 02:07:56.442131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 02:07:56.442136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 02:07:56.442146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:07:56.442151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:07:56.442159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:01.757098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:01.757202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:01.757211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:01.757215 | orchestrator |
2026-02-05 02:08:01.757221 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-05 02:08:01.757227 | orchestrator | Thursday 05 February 2026 02:07:56 +0000 (0:00:01.972) 0:00:11.333 *****
2026-02-05 02:08:01.757232 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:08:01.757258 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:08:01.757267 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:08:01.757273 | orchestrator |
2026-02-05 02:08:01.757280 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-02-05 02:08:01.757286 | orchestrator | Thursday 05 February 2026 02:07:57 +0000 (0:00:00.900) 0:00:12.233 *****
2026-02-05 02:08:01.757292 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-05 02:08:01.757298 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-05 02:08:01.757304 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-05 02:08:01.757309 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-05 02:08:01.757315 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-05 02:08:01.757321 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-05 02:08:01.757327 | orchestrator |
2026-02-05 02:08:01.757332 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-05 02:08:01.757338 | orchestrator | Thursday 05 February 2026 02:07:58 +0000 (0:00:01.511) 0:00:13.744 *****
2026-02-05 02:08:01.757344 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:08:01.757350 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:08:01.757357 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:08:01.757363 | orchestrator |
2026-02-05 02:08:01.757369 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-05 02:08:01.757375 | orchestrator | Thursday 05 February 2026 02:07:59 +0000 (0:00:00.865) 0:00:14.610 *****
2026-02-05 02:08:01.757381 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:08:01.757438 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:08:01.757446 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:08:01.757452 | orchestrator |
2026-02-05 02:08:01.757459 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-05 02:08:01.757466 | orchestrator | Thursday 05 February 2026 02:08:01 +0000 (0:00:01.460) 0:00:16.070 *****
2026-02-05 02:08:01.757473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 02:08:01.757499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:01.757504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:01.757510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 02:08:01.757521 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:08:01.757526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 02:08:01.757565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:01.757576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:01.757583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 02:08:01.757589 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:08:01.757602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 02:08:04.533731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:04.533844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:04.533854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 02:08:04.533861 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:08:04.533870 | orchestrator |
2026-02-05 02:08:04.533877 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-02-05 02:08:04.533885 | orchestrator | Thursday 05 February 2026 02:08:01 +0000 (0:00:00.582) 0:00:16.653 *****
2026-02-05 02:08:04.533891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 02:08:04.533898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 02:08:04.533905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 02:08:04.533955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:04.533963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:04.533970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 02:08:04.533977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:04.533984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:04.533991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 02:08:04.534060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:13.104228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:13.104311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81', '__omit_place_holder__6bd7a1c2e444389cef19283a26a6190eef921c81'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 02:08:13.104318 | orchestrator |
2026-02-05 02:08:13.104324 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-05 02:08:13.104329 | orchestrator | Thursday 05 February 2026 02:08:04 +0000 (0:00:02.765) 0:00:19.418 *****
2026-02-05 02:08:13.104334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 02:08:13.104339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 02:08:13.104344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 02:08:13.104366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:13.104428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:13.104434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 02:08:13.104438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:13.104442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:13.104446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 02:08:13.104450 | orchestrator |
2026-02-05 02:08:13.104454 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-05 02:08:13.104458 | orchestrator | Thursday 05 February 2026 02:08:07 +0000 (0:00:03.212) 0:00:22.630 *****
2026-02-05 02:08:13.104467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-05 02:08:13.104472 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-05 02:08:13.104476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-05 02:08:13.104480 | orchestrator |
2026-02-05 02:08:13.104484 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-05 02:08:13.104488 | orchestrator | Thursday 05 February 2026 02:08:09 +0000 (0:00:01.826) 0:00:24.457 *****
2026-02-05 02:08:13.104492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-05 02:08:13.104496 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-05 02:08:13.104500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-05 02:08:13.104503 | orchestrator |
2026-02-05 02:08:13.104507 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-05 02:08:13.104511 | orchestrator | Thursday 05 February 2026 02:08:12 +0000 (0:00:02.950) 0:00:27.407 *****
2026-02-05 02:08:13.104515 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:08:13.104521 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:08:13.104525 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:08:13.104529 | orchestrator |
2026-02-05 02:08:13.104536 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-05 02:08:24.930221 | orchestrator | Thursday 05 February 2026 02:08:13 +0000 (0:00:00.593) 0:00:28.001 *****
2026-02-05 02:08:24.930302 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-05 02:08:24.930320 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-05 02:08:24.930327 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-05 02:08:24.930333 | orchestrator |
2026-02-05 02:08:24.930339 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-05 02:08:24.930346 | orchestrator | Thursday 05 February 2026 02:08:15 +0000 (0:00:02.039) 0:00:30.040 *****
2026-02-05 02:08:24.930352 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-05 02:08:24.930358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-05 02:08:24.930364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-05 02:08:24.930370 | orchestrator |
2026-02-05 02:08:24.930393 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-05 02:08:24.930399 | orchestrator | Thursday 05 February 2026
02:08:17 +0000 (0:00:01.908) 0:00:31.949 ***** 2026-02-05 02:08:24.930405 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-05 02:08:24.930411 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-05 02:08:24.930417 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-05 02:08:24.930422 | orchestrator | 2026-02-05 02:08:24.930437 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-05 02:08:24.930444 | orchestrator | Thursday 05 February 2026 02:08:18 +0000 (0:00:01.665) 0:00:33.615 ***** 2026-02-05 02:08:24.930451 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-05 02:08:24.930456 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-05 02:08:24.930462 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-05 02:08:24.930467 | orchestrator | 2026-02-05 02:08:24.930489 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-05 02:08:24.930495 | orchestrator | Thursday 05 February 2026 02:08:20 +0000 (0:00:01.402) 0:00:35.017 ***** 2026-02-05 02:08:24.930501 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:08:24.930506 | orchestrator | 2026-02-05 02:08:24.930512 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-05 02:08:24.930517 | orchestrator | Thursday 05 February 2026 02:08:20 +0000 (0:00:00.541) 0:00:35.559 ***** 2026-02-05 02:08:24.930524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 02:08:24.930533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 02:08:24.930548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 02:08:24.930574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 02:08:24.930584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 02:08:24.930594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 02:08:24.930611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 02:08:24.930620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 02:08:24.930630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 02:08:24.930639 | orchestrator | 2026-02-05 02:08:24.930647 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-05 02:08:24.930657 | orchestrator | Thursday 05 February 2026 02:08:24 +0000 (0:00:03.671) 0:00:39.230 ***** 2026-02-05 02:08:24.930678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:25.751186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:25.751259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:25.751284 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:25.751291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:25.751299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:25.751306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:25.751311 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:25.751319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:25.751354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:25.751361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:25.751425 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:25.751433 | orchestrator | 2026-02-05 02:08:25.751440 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-05 
02:08:25.751448 | orchestrator | Thursday 05 February 2026 02:08:24 +0000 (0:00:00.601) 0:00:39.831 ***** 2026-02-05 02:08:25.751456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:25.751462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:25.751469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:25.751476 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:25.751482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:25.751499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:26.555308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:26.555443 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:26.555461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:26.555469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:26.555477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:26.555482 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:26.555486 | orchestrator | 2026-02-05 02:08:26.555491 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 02:08:26.555496 | orchestrator | Thursday 05 February 2026 02:08:25 +0000 (0:00:00.811) 0:00:40.643 ***** 2026-02-05 02:08:26.555500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:26.555505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:26.555531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:26.555541 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:26.555545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:26.555549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:26.555553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:26.555557 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:26.555560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:26.555571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:26.555578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:26.555589 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:28.108786 | orchestrator | 2026-02-05 02:08:28.108869 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-05 02:08:28.108877 | orchestrator | Thursday 05 February 2026 02:08:26 +0000 (0:00:00.798) 0:00:41.441 ***** 2026-02-05 02:08:28.108884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:28.108894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.108906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.108917 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:28.108924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:28.108932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.108954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.108978 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:28.109001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:28.109008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.109015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.109023 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:28.109027 | orchestrator | 2026-02-05 02:08:28.109031 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-05 02:08:28.109035 | orchestrator | Thursday 05 February 2026 02:08:27 +0000 (0:00:00.805) 0:00:42.246 ***** 2026-02-05 02:08:28.109039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:28.109054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.109078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.109083 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:28.109092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:28.726248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.726353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.726365 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:28.726453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:28.726462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.726467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.726492 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:28.726497 | orchestrator | 2026-02-05 02:08:28.726502 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-05 02:08:28.726508 | orchestrator | Thursday 05 February 2026 02:08:28 +0000 (0:00:00.760) 0:00:43.007 ***** 2026-02-05 02:08:28.726522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-05 02:08:28.726540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.726545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.726549 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:28.726553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-05 02:08:28.726557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:28.726565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:28.726569 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:28.726575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-05 02:08:28.726582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:30.272152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:30.272234 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:30.272241 | orchestrator | 2026-02-05 02:08:30.272258 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-05 02:08:30.272265 | orchestrator | Thursday 05 February 2026 02:08:28 +0000 (0:00:00.613) 0:00:43.621 ***** 2026-02-05 02:08:30.272270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:30.272281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:30.272302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:30.272307 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:30.272312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:30.272333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:30.272352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:30.272359 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:30.272365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:30.272420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:30.272472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:30.272477 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:30.272481 | orchestrator | 2026-02-05 02:08:30.272485 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-05 02:08:30.272489 | orchestrator | Thursday 05 February 2026 02:08:29 +0000 (0:00:00.803) 0:00:44.425 ***** 2026-02-05 02:08:30.272493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 02:08:30.272497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:30.272513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:37.011611 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:37.011700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 02:08:37.011714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:37.011745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:37.011753 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:37.011760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 02:08:37.011780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 02:08:37.011787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 02:08:37.011794 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:37.011800 | orchestrator | 2026-02-05 02:08:37.011808 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-05 02:08:37.011816 | orchestrator | Thursday 05 February 2026 02:08:30 +0000 (0:00:00.747) 0:00:45.172 ***** 2026-02-05 02:08:37.011822 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 02:08:37.011843 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 02:08:37.011850 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 02:08:37.011856 | orchestrator | 2026-02-05 02:08:37.011862 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-05 02:08:37.011869 | orchestrator | Thursday 05 February 2026 02:08:31 +0000 (0:00:01.461) 0:00:46.634 ***** 2026-02-05 02:08:37.011876 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 02:08:37.011883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 02:08:37.011890 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 02:08:37.011897 | orchestrator | 2026-02-05 02:08:37.011910 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-05 02:08:37.011915 | orchestrator | Thursday 05 February 2026 02:08:33 +0000 (0:00:01.928) 0:00:48.562 ***** 2026-02-05 02:08:37.011921 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 02:08:37.011928 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 02:08:37.011936 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 02:08:37.011943 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 02:08:37.011950 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:37.011956 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 02:08:37.011963 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:37.011969 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 02:08:37.011976 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:37.011983 | orchestrator | 2026-02-05 02:08:37.011990 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-05 02:08:37.011997 | orchestrator | Thursday 05 February 2026 02:08:34 +0000 (0:00:00.770) 0:00:49.332 ***** 2026-02-05 02:08:37.012004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 02:08:37.012013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 02:08:37.012025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 02:08:37.012036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 02:08:40.841411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 02:08:40.841500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 02:08:40.841512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 02:08:40.841524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 02:08:40.841530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 02:08:40.841537 | orchestrator | 2026-02-05 02:08:40.841560 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-05 02:08:40.841571 | orchestrator | Thursday 05 February 2026 02:08:36 +0000 (0:00:02.573) 0:00:51.905 ***** 2026-02-05 02:08:40.841580 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:08:40.841586 | orchestrator | 2026-02-05 02:08:40.841593 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-05 02:08:40.841599 | orchestrator | Thursday 05 February 2026 02:08:37 +0000 (0:00:00.705) 0:00:52.611 ***** 2026-02-05 02:08:40.841622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 02:08:40.841650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 02:08:40.841658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 02:08:40.841665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:40.841676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 02:08:40.841687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 02:08:40.841695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:40.841712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 02:08:41.440137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 02:08:41.440208 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 02:08:41.440215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:41.440232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 02:08:41.440237 | orchestrator | 2026-02-05 02:08:41.440243 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-05 02:08:41.440248 | orchestrator | Thursday 05 February 2026 02:08:40 +0000 (0:00:03.123) 0:00:55.734 ***** 2026-02-05 02:08:41.440253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 02:08:41.440282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 02:08:41.440287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:41.440291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 02:08:41.440296 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:41.440301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 02:08:41.440308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 02:08:41.440316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 02:08:41.440323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.126153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 02:08:50.126242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.126253 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:50.126263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 
02:08:50.126271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.126301 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:50.126310 | orchestrator | 2026-02-05 02:08:50.126319 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-05 02:08:50.126328 | orchestrator | Thursday 05 February 2026 02:08:41 +0000 (0:00:00.604) 0:00:56.339 ***** 2026-02-05 02:08:50.126336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 02:08:50.126345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 02:08:50.126354 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:50.126396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 02:08:50.126405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 02:08:50.126412 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:50.126420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 02:08:50.126427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 02:08:50.126434 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:50.126441 | orchestrator | 2026-02-05 02:08:50.126449 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-05 02:08:50.126456 | orchestrator | Thursday 05 February 2026 02:08:42 +0000 (0:00:00.967) 0:00:57.306 ***** 2026-02-05 02:08:50.126479 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:08:50.126488 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:08:50.126501 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:08:50.126508 | orchestrator | 2026-02-05 02:08:50.126516 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-05 02:08:50.126524 | orchestrator | Thursday 05 February 2026 02:08:43 +0000 (0:00:01.327) 0:00:58.633 ***** 2026-02-05 02:08:50.126530 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:08:50.126537 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:08:50.126544 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:08:50.126551 | orchestrator | 2026-02-05 02:08:50.126558 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-05 02:08:50.126565 | orchestrator | Thursday 05 February 2026 02:08:45 +0000 (0:00:02.080) 0:01:00.714 ***** 2026-02-05 02:08:50.126572 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:08:50.126579 | 
orchestrator | 2026-02-05 02:08:50.126586 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-05 02:08:50.126593 | orchestrator | Thursday 05 February 2026 02:08:46 +0000 (0:00:00.646) 0:01:01.360 ***** 2026-02-05 02:08:50.126603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 02:08:50.126632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.126642 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.126650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 02:08:50.126664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 02:08:50.733818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733835 | orchestrator | 2026-02-05 02:08:50.733844 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-05 02:08:50.733852 | orchestrator | Thursday 05 February 2026 02:08:50 +0000 (0:00:03.664) 0:01:05.025 ***** 2026-02-05 02:08:50.733860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 02:08:50.733884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733905 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:50.733918 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 02:08:50.733928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:08:50.733952 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:50.733994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 02:08:59.904407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 02:08:59.904501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:08:59.904510 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:59.904517 | orchestrator | 2026-02-05 02:08:59.904523 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-05 02:08:59.904528 | orchestrator | Thursday 05 February 2026 02:08:50 +0000 (0:00:00.604) 0:01:05.629 ***** 2026-02-05 02:08:59.904546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 02:08:59.904552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 02:08:59.904558 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:59.904562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 02:08:59.904566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 02:08:59.904570 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:59.904574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 02:08:59.904578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 02:08:59.904582 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:59.904586 | orchestrator | 2026-02-05 02:08:59.904590 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-05 02:08:59.904594 | orchestrator | Thursday 05 February 2026 02:08:51 +0000 (0:00:00.791) 0:01:06.420 ***** 2026-02-05 02:08:59.904598 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:08:59.904602 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:08:59.904607 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:08:59.904610 | orchestrator | 2026-02-05 02:08:59.904614 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-05 02:08:59.904618 | orchestrator | Thursday 05 February 2026 02:08:52 +0000 (0:00:01.273) 0:01:07.694 ***** 2026-02-05 02:08:59.904638 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:08:59.904642 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:08:59.904646 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:08:59.904649 | orchestrator | 2026-02-05 02:08:59.904653 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-05 02:08:59.904657 | orchestrator | 
Thursday 05 February 2026 02:08:54 +0000 (0:00:02.003) 0:01:09.697 ***** 2026-02-05 02:08:59.904661 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:59.904665 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:08:59.904668 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:08:59.904672 | orchestrator | 2026-02-05 02:08:59.904676 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-05 02:08:59.904680 | orchestrator | Thursday 05 February 2026 02:08:55 +0000 (0:00:00.469) 0:01:10.166 ***** 2026-02-05 02:08:59.904683 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:08:59.904687 | orchestrator | 2026-02-05 02:08:59.904691 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-05 02:08:59.904706 | orchestrator | Thursday 05 February 2026 02:08:55 +0000 (0:00:00.660) 0:01:10.826 ***** 2026-02-05 02:08:59.904712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 02:08:59.904720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 02:08:59.904724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 02:08:59.904728 | orchestrator | 2026-02-05 02:08:59.904732 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-05 02:08:59.904737 | orchestrator | Thursday 05 February 2026 02:08:58 +0000 (0:00:02.455) 0:01:13.282 ***** 2026-02-05 02:08:59.904745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 02:08:59.904750 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:08:59.904774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 02:09:06.965346 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:06.965502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 02:09:06.965511 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:06.965516 | orchestrator | 2026-02-05 02:09:06.965521 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-05 02:09:06.965527 | orchestrator | Thursday 05 February 2026 02:08:59 +0000 (0:00:01.519) 0:01:14.802 ***** 2026-02-05 02:09:06.965543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 02:09:06.965550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 02:09:06.965555 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:06.965559 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 02:09:06.965578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 02:09:06.965582 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:06.965586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 02:09:06.965590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 02:09:06.965594 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:06.965598 | orchestrator | 2026-02-05 02:09:06.965602 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-05 02:09:06.965606 | orchestrator | Thursday 05 February 2026 02:09:01 +0000 (0:00:01.506) 0:01:16.309 ***** 2026-02-05 02:09:06.965610 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:06.965614 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:06.965617 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:06.965621 | orchestrator | 2026-02-05 02:09:06.965627 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-05 02:09:06.965643 | orchestrator | Thursday 05 February 2026 02:09:01 +0000 (0:00:00.400) 0:01:16.710 ***** 2026-02-05 02:09:06.965647 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:06.965651 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:06.965654 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:06.965658 | orchestrator | 2026-02-05 02:09:06.965662 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-05 02:09:06.965666 | orchestrator | Thursday 05 February 2026 02:09:02 +0000 (0:00:01.175) 0:01:17.886 ***** 2026-02-05 02:09:06.965670 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:09:06.965674 | orchestrator | 2026-02-05 02:09:06.965678 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-05 02:09:06.965681 | orchestrator | Thursday 05 February 2026 02:09:03 +0000 (0:00:00.869) 0:01:18.755 ***** 2026-02-05 02:09:06.965689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 02:09:06.965699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 02:09:06.965704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:09:06.965711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 02:09:06.965720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 02:09:07.625325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625482 | orchestrator | 2026-02-05 02:09:07.625490 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-05 02:09:07.625498 | orchestrator | Thursday 05 February 2026 02:09:07 +0000 (0:00:03.193) 0:01:21.949 ***** 2026-02-05 02:09:07.625508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 02:09:07.625518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 02:09:07.625536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 02:09:12.041966 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:12.042095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 02:09:12.042132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 
02:09:12.042144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 02:09:12.042150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 02:09:12.042155 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:12.042160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 02:09:12.042178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:09:12.042191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 02:09:12.042196 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 02:09:12.042201 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:12.042206 | orchestrator | 2026-02-05 02:09:12.042212 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-05 02:09:12.042218 | orchestrator | Thursday 05 February 2026 02:09:07 +0000 (0:00:00.679) 0:01:22.629 ***** 2026-02-05 02:09:12.042223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 02:09:12.042230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 02:09:12.042236 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:12.042241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 02:09:12.042245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 02:09:12.042250 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:12.042255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 02:09:12.042259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-05 02:09:12.042264 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:12.042269 | orchestrator | 2026-02-05 02:09:12.042273 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-05 02:09:12.042278 | orchestrator | Thursday 05 February 2026 02:09:08 +0000 (0:00:01.004) 0:01:23.634 ***** 2026-02-05 02:09:12.042283 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:12.042291 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:12.042296 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:12.042301 | orchestrator | 2026-02-05 02:09:12.042305 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-05 02:09:12.042310 | orchestrator | Thursday 05 February 2026 02:09:10 +0000 (0:00:01.364) 0:01:24.999 ***** 2026-02-05 02:09:12.042315 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:12.042320 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:12.042336 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:12.042341 | orchestrator | 2026-02-05 02:09:12.042378 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-05 02:09:16.698148 | orchestrator | Thursday 05 February 2026 02:09:12 +0000 
(0:00:01.937) 0:01:26.936 ***** 2026-02-05 02:09:16.698228 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:16.698240 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:16.698247 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:16.698253 | orchestrator | 2026-02-05 02:09:16.698259 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-05 02:09:16.698266 | orchestrator | Thursday 05 February 2026 02:09:12 +0000 (0:00:00.314) 0:01:27.250 ***** 2026-02-05 02:09:16.698272 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:16.698279 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:16.698286 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:16.698292 | orchestrator | 2026-02-05 02:09:16.698298 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-05 02:09:16.698305 | orchestrator | Thursday 05 February 2026 02:09:12 +0000 (0:00:00.287) 0:01:27.538 ***** 2026-02-05 02:09:16.698312 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:09:16.698318 | orchestrator | 2026-02-05 02:09:16.698325 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-05 02:09:16.698344 | orchestrator | Thursday 05 February 2026 02:09:13 +0000 (0:00:00.964) 0:01:28.502 ***** 2026-02-05 02:09:16.698399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 02:09:16.698409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 02:09:16.698416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 02:09:16.698440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 02:09:16.698459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 02:09:16.698468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 02:09:16.698472 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 02:09:16.698476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:09:16.698480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 02:09:16.698488 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 02:09:16.698496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 02:09:17.598883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 02:09:17.598912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-05 02:09:17.598939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.598961 | orchestrator | 2026-02-05 
02:09:17.598969 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-05 02:09:17.598975 | orchestrator | Thursday 05 February 2026 02:09:16 +0000 (0:00:03.329) 0:01:31.832 ***** 2026-02-05 02:09:17.598981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 02:09:17.598987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 02:09:17.598998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.797947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798155 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:17.798170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 02:09:17.798184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 02:09:17.798865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 02:09:17.798951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:09:17.798960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 02:09:17.798982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 02:09:27.202127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 02:09:27.202180 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:27.202203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 02:09:27.202209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 02:09:27.202223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:09:27.202227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 02:09:27.202231 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:27.202235 | orchestrator | 2026-02-05 02:09:27.202240 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-05 02:09:27.202245 | orchestrator | Thursday 05 February 2026 02:09:17 +0000 (0:00:00.862) 0:01:32.695 ***** 2026-02-05 02:09:27.202250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-05 02:09:27.202256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-05 02:09:27.202261 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:27.202265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-05 02:09:27.202277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-05 02:09:27.202281 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:27.202284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-05 02:09:27.202292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-05 02:09:27.202296 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:27.202300 | orchestrator | 2026-02-05 02:09:27.202303 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-05 02:09:27.202307 | orchestrator | Thursday 05 February 2026 02:09:18 +0000 (0:00:01.108) 0:01:33.803 ***** 2026-02-05 02:09:27.202311 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:27.202315 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:27.202319 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:27.202323 | orchestrator | 2026-02-05 02:09:27.202327 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-05 02:09:27.202331 | orchestrator | Thursday 05 February 2026 02:09:20 +0000 (0:00:01.329) 0:01:35.133 ***** 2026-02-05 02:09:27.202334 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:27.202338 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:27.202342 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:27.202345 | 
orchestrator | 2026-02-05 02:09:27.202399 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-05 02:09:27.202403 | orchestrator | Thursday 05 February 2026 02:09:22 +0000 (0:00:01.949) 0:01:37.082 ***** 2026-02-05 02:09:27.202407 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:27.202411 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:27.202414 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:27.202418 | orchestrator | 2026-02-05 02:09:27.202422 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-05 02:09:27.202426 | orchestrator | Thursday 05 February 2026 02:09:22 +0000 (0:00:00.295) 0:01:37.377 ***** 2026-02-05 02:09:27.202430 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:09:27.202434 | orchestrator | 2026-02-05 02:09:27.202437 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-05 02:09:27.202441 | orchestrator | Thursday 05 February 2026 02:09:23 +0000 (0:00:00.806) 0:01:38.184 ***** 2026-02-05 02:09:27.202451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 02:09:27.202488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 02:09:29.970996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 02:09:29.971075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 02:09:29.971118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 02:09:29.971125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 02:09:29.971134 | orchestrator | 2026-02-05 02:09:29.971140 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-05 02:09:29.971146 | orchestrator | Thursday 05 February 2026 02:09:27 +0000 (0:00:04.027) 0:01:42.211 ***** 2026-02-05 02:09:29.971158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 02:09:30.078848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 02:09:30.078952 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:30.078966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 
02:09:30.079003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 02:09:30.079014 | orchestrator | skipping: [testbed-node-1] 
2026-02-05 02:09:30.079018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 02:09:30.079029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 02:09:41.456731 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:41.456817 | orchestrator | 2026-02-05 02:09:41.456825 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-05 02:09:41.456845 | orchestrator | 
Thursday 05 February 2026 02:09:30 +0000 (0:00:02.766) 0:01:44.978 ***** 2026-02-05 02:09:41.456851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 02:09:41.456858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 02:09:41.456864 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:41.456868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 02:09:41.456872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 02:09:41.456876 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:41.456880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 02:09:41.456902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 02:09:41.456907 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:41.456911 | orchestrator | 2026-02-05 02:09:41.456915 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-05 02:09:41.456919 | orchestrator | Thursday 05 February 2026 02:09:33 +0000 (0:00:03.012) 0:01:47.991 ***** 2026-02-05 02:09:41.456938 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:41.456942 | orchestrator 
| changed: [testbed-node-1] 2026-02-05 02:09:41.456946 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:41.456950 | orchestrator | 2026-02-05 02:09:41.456953 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-05 02:09:41.456957 | orchestrator | Thursday 05 February 2026 02:09:34 +0000 (0:00:01.577) 0:01:49.569 ***** 2026-02-05 02:09:41.456961 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:41.456965 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:41.456968 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:41.456972 | orchestrator | 2026-02-05 02:09:41.456976 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-05 02:09:41.456998 | orchestrator | Thursday 05 February 2026 02:09:36 +0000 (0:00:01.861) 0:01:51.430 ***** 2026-02-05 02:09:41.457002 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:41.457006 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:41.457010 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:41.457013 | orchestrator | 2026-02-05 02:09:41.457017 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-05 02:09:41.457021 | orchestrator | Thursday 05 February 2026 02:09:37 +0000 (0:00:00.489) 0:01:51.920 ***** 2026-02-05 02:09:41.457025 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:09:41.457029 | orchestrator | 2026-02-05 02:09:41.457032 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-05 02:09:41.457036 | orchestrator | Thursday 05 February 2026 02:09:37 +0000 (0:00:00.800) 0:01:52.720 ***** 2026-02-05 02:09:41.457041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 02:09:41.457045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 02:09:41.457049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 02:09:41.457053 | 
orchestrator | 2026-02-05 02:09:41.457057 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-05 02:09:41.457066 | orchestrator | Thursday 05 February 2026 02:09:40 +0000 (0:00:03.015) 0:01:55.735 ***** 2026-02-05 02:09:41.457070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 02:09:41.457074 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:41.457081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 02:09:50.452168 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:50.452263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 02:09:50.452436 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:50.452456 | orchestrator | 2026-02-05 02:09:50.452464 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-05 02:09:50.452471 | orchestrator | Thursday 05 February 2026 02:09:41 +0000 (0:00:00.620) 0:01:56.355 ***** 2026-02-05 02:09:50.452478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 02:09:50.452486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 02:09:50.452494 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:50.452500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 02:09:50.452508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 02:09:50.452515 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 02:09:50.452522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 02:09:50.452528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 02:09:50.452550 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:50.452556 | orchestrator | 2026-02-05 02:09:50.452562 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-05 02:09:50.452568 | orchestrator | Thursday 05 February 2026 02:09:42 +0000 (0:00:00.623) 0:01:56.979 ***** 2026-02-05 02:09:50.452574 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:50.452580 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:50.452586 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:50.452592 | orchestrator | 2026-02-05 02:09:50.452598 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-05 02:09:50.452604 | orchestrator | Thursday 05 February 2026 02:09:43 +0000 (0:00:01.420) 0:01:58.400 ***** 2026-02-05 02:09:50.452610 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:09:50.452617 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:09:50.452622 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:09:50.452628 | orchestrator | 2026-02-05 02:09:50.452634 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-05 02:09:50.452646 | orchestrator | Thursday 05 February 2026 02:09:45 +0000 (0:00:02.037) 0:02:00.438 ***** 2026-02-05 02:09:50.452653 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:50.452659 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
02:09:50.452666 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:50.452672 | orchestrator | 2026-02-05 02:09:50.452679 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-05 02:09:50.452685 | orchestrator | Thursday 05 February 2026 02:09:46 +0000 (0:00:00.528) 0:02:00.966 ***** 2026-02-05 02:09:50.452691 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:09:50.452697 | orchestrator | 2026-02-05 02:09:50.452703 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-05 02:09:50.452709 | orchestrator | Thursday 05 February 2026 02:09:47 +0000 (0:00:00.957) 0:02:01.924 ***** 2026-02-05 02:09:50.452742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 02:09:50.452763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 02:09:50.452777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 02:09:52.441091 | orchestrator | 2026-02-05 02:09:52.441171 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-05 02:09:52.441182 | orchestrator | Thursday 05 February 2026 02:09:50 +0000 (0:00:03.426) 0:02:05.350 ***** 2026-02-05 02:09:52.441210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 02:09:52.441221 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:09:52.441242 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 02:09:52.441269 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:09:52.441281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 02:09:52.441288 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:09:52.441295 | orchestrator | 2026-02-05 02:09:52.441301 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-05 02:09:52.441308 | orchestrator | Thursday 05 February 2026 02:09:51 +0000 (0:00:01.068) 0:02:06.418 ***** 2026-02-05 02:09:52.441316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 02:09:52.441360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 02:09:52.441376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 02:09:52.441390 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 02:10:00.837792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 02:10:00.837903 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:00.837932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 02:10:00.837957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 02:10:00.838081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 02:10:00.838107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 02:10:00.838129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 02:10:00.838150 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:00.838167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 02:10:00.838178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 02:10:00.838190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 02:10:00.838228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 02:10:00.838241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
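The horizon frontend rules logged above route ACME HTTP-01 challenges with `use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }`. As a minimal sketch (not part of the job itself), the Python below mirrors that ACL: HAProxy's `path_reg` is an unanchored regex match on the request path, which `re.search` reproduces.

```python
import re

# The path_reg pattern copied from the logged HAProxy frontend rule.
# Note the '.' before 'well-known' is a regex dot in the original rule;
# it is kept verbatim here rather than escaped.
ACME_PATH = re.compile(r"^/.well-known/acme-challenge/.+")

def routed_to_acme_backend(path: str) -> bool:
    """Return True if the logged rule would send this request path
    to acme_client_back instead of the horizon backend."""
    return ACME_PATH.search(path) is not None
```

A challenge path such as `/.well-known/acme-challenge/token123` matches, while ordinary dashboard paths (and a bare `/.well-known/acme-challenge/` with no token) do not, since the trailing `.+` requires at least one character after the final slash.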
2026-02-05 02:10:00.838252 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:00.838264 | orchestrator | 2026-02-05 02:10:00.838276 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-05 02:10:00.838289 | orchestrator | Thursday 05 February 2026 02:09:52 +0000 (0:00:00.921) 0:02:07.340 ***** 2026-02-05 02:10:00.838300 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:00.838311 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:00.838324 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:00.838379 | orchestrator | 2026-02-05 02:10:00.838443 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-05 02:10:00.838463 | orchestrator | Thursday 05 February 2026 02:09:53 +0000 (0:00:01.360) 0:02:08.700 ***** 2026-02-05 02:10:00.838482 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:00.838502 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:00.838520 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:00.838538 | orchestrator | 2026-02-05 02:10:00.838558 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-05 02:10:00.838573 | orchestrator | Thursday 05 February 2026 02:09:55 +0000 (0:00:01.966) 0:02:10.667 ***** 2026-02-05 02:10:00.838587 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:00.838600 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:00.838634 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:00.838648 | orchestrator | 2026-02-05 02:10:00.838661 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-05 02:10:00.838674 | orchestrator | Thursday 05 February 2026 02:09:56 +0000 (0:00:00.293) 0:02:10.960 ***** 2026-02-05 02:10:00.838688 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:00.838701 | orchestrator | skipping: [testbed-node-1] 
2026-02-05 02:10:00.838714 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:00.838725 | orchestrator | 2026-02-05 02:10:00.838736 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-05 02:10:00.838747 | orchestrator | Thursday 05 February 2026 02:09:56 +0000 (0:00:00.500) 0:02:11.461 ***** 2026-02-05 02:10:00.838758 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:10:00.838768 | orchestrator | 2026-02-05 02:10:00.838779 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-05 02:10:00.838790 | orchestrator | Thursday 05 February 2026 02:09:57 +0000 (0:00:00.965) 0:02:12.426 ***** 2026-02-05 02:10:00.838816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:10:00.838846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:10:00.838860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:10:00.838874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:10:00.838895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:10:01.948133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:10:01.948228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:10:01.948271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:10:01.948284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:10:01.948296 | 
orchestrator | 2026-02-05 02:10:01.948308 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-05 02:10:01.948320 | orchestrator | Thursday 05 February 2026 02:10:00 +0000 (0:00:03.309) 0:02:15.735 ***** 2026-02-05 02:10:01.948369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:10:01.948389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-05 02:10:01.948400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:10:01.948420 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:01.948432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:10:01.948439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:10:01.948445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:10:01.948451 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:01.948467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:10:10.860627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:10:10.860700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:10:10.860708 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:10.860714 | orchestrator | 2026-02-05 02:10:10.860719 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-05 02:10:10.860724 | orchestrator | Thursday 05 February 2026 02:10:01 +0000 (0:00:01.104) 0:02:16.840 ***** 2026-02-05 02:10:10.860730 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 02:10:10.860736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 02:10:10.860741 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:10.860746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 02:10:10.860750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 02:10:10.860754 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:10.860758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 02:10:10.860763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 02:10:10.860767 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:10.860770 
| orchestrator | 2026-02-05 02:10:10.860775 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-05 02:10:10.860781 | orchestrator | Thursday 05 February 2026 02:10:02 +0000 (0:00:00.794) 0:02:17.634 ***** 2026-02-05 02:10:10.860787 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:10.860793 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:10.860818 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:10.860825 | orchestrator | 2026-02-05 02:10:10.860831 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-05 02:10:10.860839 | orchestrator | Thursday 05 February 2026 02:10:04 +0000 (0:00:01.312) 0:02:18.947 ***** 2026-02-05 02:10:10.860845 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:10.860850 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:10.860856 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:10.860863 | orchestrator | 2026-02-05 02:10:10.860869 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-05 02:10:10.860876 | orchestrator | Thursday 05 February 2026 02:10:06 +0000 (0:00:02.044) 0:02:20.992 ***** 2026-02-05 02:10:10.860883 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:10.860897 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:10.860901 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:10.860905 | orchestrator | 2026-02-05 02:10:10.860909 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-05 02:10:10.860939 | orchestrator | Thursday 05 February 2026 02:10:06 +0000 (0:00:00.294) 0:02:21.286 ***** 2026-02-05 02:10:10.860944 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:10:10.860951 | orchestrator | 2026-02-05 02:10:10.860956 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-02-05 02:10:10.860962 | orchestrator | Thursday 05 February 2026 02:10:07 +0000 (0:00:01.218) 0:02:22.505 ***** 2026-02-05 02:10:10.860970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 02:10:10.860980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:10:10.860987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 02:10:10.861000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:10:10.861014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 02:10:16.170755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:10:16.170833 | orchestrator | 2026-02-05 02:10:16.170840 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-05 02:10:16.170846 | orchestrator | Thursday 05 February 2026 02:10:10 +0000 (0:00:03.254) 0:02:25.760 ***** 2026-02-05 02:10:16.170852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 02:10:16.170888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:10:16.170908 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:16.170917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 02:10:16.170931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:10:16.170936 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:16.170940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 02:10:16.170944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:10:16.170953 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:16.170957 | orchestrator | 2026-02-05 02:10:16.170961 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-05 02:10:16.170965 | orchestrator | Thursday 05 February 2026 02:10:11 +0000 (0:00:00.867) 0:02:26.627 ***** 2026-02-05 02:10:16.170970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 02:10:16.170976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 02:10:16.170981 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 02:10:16.170985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 02:10:16.170989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 02:10:16.170993 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:16.170996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 02:10:16.171000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 02:10:16.171004 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:16.171008 | orchestrator | 2026-02-05 02:10:16.171014 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-05 02:10:16.171018 | orchestrator | Thursday 05 February 2026 02:10:12 +0000 (0:00:01.137) 0:02:27.765 ***** 2026-02-05 02:10:16.171022 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:16.171026 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:16.171030 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:16.171034 | orchestrator | 2026-02-05 02:10:16.171038 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-05 02:10:16.171042 | orchestrator | Thursday 05 February 2026 02:10:14 +0000 (0:00:01.320) 0:02:29.085 ***** 2026-02-05 02:10:16.171045 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:16.171049 | orchestrator | changed: 
[testbed-node-1] 2026-02-05 02:10:16.171053 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:16.171057 | orchestrator | 2026-02-05 02:10:16.171061 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-05 02:10:16.171067 | orchestrator | Thursday 05 February 2026 02:10:16 +0000 (0:00:01.978) 0:02:31.064 ***** 2026-02-05 02:10:20.622710 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:10:20.622796 | orchestrator | 2026-02-05 02:10:20.622803 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-05 02:10:20.622808 | orchestrator | Thursday 05 February 2026 02:10:17 +0000 (0:00:01.310) 0:02:32.374 ***** 2026-02-05 02:10:20.622814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 02:10:20.622840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 02:10:20.622883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 02:10:20.622896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 02:10:20.622919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270727 | orchestrator | 2026-02-05 02:10:21.270820 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-05 02:10:21.270833 | orchestrator | Thursday 05 February 2026 02:10:20 +0000 (0:00:03.221) 0:02:35.596 ***** 2026-02-05 02:10:21.270868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 02:10:21.270875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270891 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:21.270908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 02:10:21.270925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270942 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:21.270946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 02:10:21.270950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 02:10:21.270972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 02:10:31.529583 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:31.529685 | orchestrator | 2026-02-05 02:10:31.529693 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-05 02:10:31.529699 | orchestrator | Thursday 05 February 2026 02:10:21 +0000 (0:00:00.659) 0:02:36.255 ***** 2026-02-05 02:10:31.529705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-05 02:10:31.529711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-05 02:10:31.529717 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:31.529722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-05 02:10:31.529726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-05 02:10:31.529730 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:31.529734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-05 02:10:31.529738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-05 02:10:31.529742 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:10:31.529746 | orchestrator | 2026-02-05 02:10:31.529749 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-05 02:10:31.529753 | orchestrator | Thursday 05 February 2026 02:10:22 +0000 (0:00:00.800) 0:02:37.055 ***** 2026-02-05 02:10:31.529757 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:31.529761 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:31.529765 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:31.529769 | orchestrator | 2026-02-05 02:10:31.529773 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-05 02:10:31.529776 | orchestrator | Thursday 05 February 2026 02:10:23 +0000 (0:00:01.425) 0:02:38.480 ***** 2026-02-05 02:10:31.529780 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:10:31.529784 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:10:31.529788 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:10:31.529792 | orchestrator | 2026-02-05 02:10:31.529796 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-05 02:10:31.529799 | orchestrator | Thursday 05 February 2026 02:10:25 +0000 (0:00:01.946) 0:02:40.427 ***** 2026-02-05 02:10:31.529803 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:10:31.529807 | orchestrator | 2026-02-05 02:10:31.529811 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-05 02:10:31.529815 | orchestrator | Thursday 05 February 2026 02:10:26 +0000 (0:00:00.974) 0:02:41.401 ***** 2026-02-05 02:10:31.529819 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 02:10:31.529823 | orchestrator | 2026-02-05 02:10:31.529827 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-05 02:10:31.529847 | orchestrator | Thursday 05 February 2026 02:10:29 +0000 (0:00:03.046) 0:02:44.448 ***** 2026-02-05 02:10:31.529877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:10:31.529891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 02:10:31.529902 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:31.529909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:10:31.529918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 02:10:31.529922 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:31.529936 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:10:33.695685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 02:10:33.695799 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:33.695817 | orchestrator |
2026-02-05 02:10:33.695827 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-05 02:10:33.695837 | orchestrator | Thursday 05 February 2026 02:10:31 +0000 (0:00:01.974) 0:02:46.422 *****
2026-02-05 02:10:33.695885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 02:10:33.695897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 02:10:33.695905 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:33.695933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 02:10:33.695957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 02:10:33.695970 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:33.695988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 02:10:33.696017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 02:10:43.119258 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:43.119392 | orchestrator |
2026-02-05 02:10:43.119407 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-02-05 02:10:43.119418 | orchestrator | Thursday 05 February 2026 02:10:33 +0000 (0:00:02.172) 0:02:48.595 *****
2026-02-05 02:10:43.119428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 02:10:43.119460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 02:10:43.119481 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:43.119490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 02:10:43.119498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 02:10:43.119505 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:43.119513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 02:10:43.119521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 02:10:43.119528 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:43.119535 | orchestrator |
2026-02-05 02:10:43.119542 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-02-05 02:10:43.119550 | orchestrator | Thursday 05 February 2026 02:10:35 +0000 (0:00:02.212) 0:02:50.808 *****
2026-02-05 02:10:43.119567 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:10:43.119602 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:10:43.119611 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:10:43.119618 | orchestrator |
2026-02-05 02:10:43.119625 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-05 02:10:43.119632 | orchestrator | Thursday 05 February 2026 02:10:37 +0000 (0:00:02.072) 0:02:52.880 *****
2026-02-05 02:10:43.119639 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:43.119646 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:43.119653 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:43.119661 | orchestrator |
2026-02-05 02:10:43.119668 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-05 02:10:43.119675 | orchestrator | Thursday 05 February 2026 02:10:39 +0000 (0:00:01.502) 0:02:54.383 *****
2026-02-05 02:10:43.119682 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:43.119696 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:43.119703 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:43.119710 | orchestrator |
2026-02-05 02:10:43.119717 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-05 02:10:43.119725 | orchestrator | Thursday 05 February 2026 02:10:40 +0000 (0:00:00.543) 0:02:54.927 *****
2026-02-05 02:10:43.119733 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:10:43.119742 | orchestrator |
2026-02-05 02:10:43.119749 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-05 02:10:43.119756 | orchestrator | Thursday 05 February 2026 02:10:41 +0000 (0:00:01.169) 0:02:56.096 *****
2026-02-05 02:10:43.119769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 02:10:43.119780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 02:10:43.119787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 02:10:43.119795 | orchestrator |
2026-02-05 02:10:43.119802 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-02-05 02:10:43.119816 | orchestrator | Thursday 05 February 2026 02:10:42 +0000 (0:00:01.495) 0:02:57.591 *****
2026-02-05 02:10:43.119829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 02:10:51.352925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 02:10:51.353031 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:51.353044 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:51.353051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 02:10:51.353059 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:51.353065 | orchestrator |
2026-02-05 02:10:51.353073 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-02-05 02:10:51.353081 | orchestrator | Thursday 05 February 2026 02:10:43 +0000 (0:00:00.636) 0:02:58.227 *****
2026-02-05 02:10:51.353090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-05 02:10:51.353097 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:51.353104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-05 02:10:51.353111 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:51.353117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-05 02:10:51.353146 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:51.353153 | orchestrator |
2026-02-05 02:10:51.353197 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-02-05 02:10:51.353205 | orchestrator | Thursday 05 February 2026 02:10:43 +0000 (0:00:00.623) 0:02:58.850 *****
2026-02-05 02:10:51.353211 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:51.353218 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:51.353225 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:51.353232 | orchestrator |
2026-02-05 02:10:51.353239 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-02-05 02:10:51.353245 | orchestrator | Thursday 05 February 2026 02:10:44 +0000 (0:00:00.431) 0:02:59.282 *****
2026-02-05 02:10:51.353252 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:51.353259 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:51.353266 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:51.353273 | orchestrator |
2026-02-05 02:10:51.353280 | orchestrator | TASK [include_role : mistral] **************************************************
2026-02-05 02:10:51.353288 | orchestrator | Thursday 05 February 2026 02:10:45 +0000 (0:00:01.528) 0:03:00.810 *****
2026-02-05 02:10:51.353295 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:10:51.353302 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:10:51.353309 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:10:51.353353 | orchestrator |
2026-02-05 02:10:51.353361 | orchestrator | TASK [include_role : neutron] **************************************************
2026-02-05 02:10:51.353368 | orchestrator | Thursday 05 February 2026 02:10:46 +0000 (0:00:00.289) 0:03:01.100 *****
2026-02-05 02:10:51.353375 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:10:51.353382 | orchestrator |
2026-02-05 02:10:51.353389 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-05 02:10:51.353396 | orchestrator | Thursday 05 February 2026 02:10:47 +0000 (0:00:01.424) 0:03:02.525 *****
2026-02-05 02:10:51.353421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-05 02:10:51.353437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.353444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.353461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-05 02:10:51.353469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.353482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.452569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-05 02:10:51.452695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.452755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.452777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.452799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-05 02:10:51.452846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-05 02:10:51.452876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-05 02:10:51.452896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.452925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.452937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-05 02:10:51.452949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-05 02:10:51.453018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-05 02:10:51.453053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.617653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.617765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-05 02:10:51.617780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-05 02:10:51.617790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-05 02:10:51.617797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-05 02:10:51.617807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True,
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.617835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 02:10:51.617851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 02:10:51.617861 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:51.617869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.617878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:51.617893 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 02:10:51.903531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:51.903628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:10:51.903642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.903655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.903669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.903701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 02:10:51.903723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.903737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:51.903750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:51.903763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:51.903771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:51.903788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.068098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 02:10:53.068227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.068253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.068274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 02:10:53.068296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:53.068373 | orchestrator | 2026-02-05 02:10:53.068396 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-05 02:10:53.068449 | orchestrator | Thursday 05 February 2026 02:10:51 +0000 (0:00:04.278) 0:03:06.803 ***** 2026-02-05 02:10:53.068512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:10:53.068534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.068553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.068571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.068591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 02:10:53.068642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.164219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.164229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:53.164239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:10:53.164288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 02:10:53.164292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.164302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.164366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 02:10:53.268768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 02:10:53.268849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:53.268859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.268887 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:10:53.268897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.268906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.268925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.268948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:53.268960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.268971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 02:10:53.268990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.269001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:10:53.269014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.269033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.674915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 02:10:53.675062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.675108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:53.675123 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:10:53.675142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.675152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 02:10:53.675175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.675183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.675197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:10:53.675205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.675215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:10:53.675222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 02:10:53.675233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-02-05 02:11:02.317500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 02:11:02.317600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 02:11:02.317635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 02:11:02.317660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:11:02.317669 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:02.317678 | orchestrator | 2026-02-05 02:11:02.317687 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-05 02:11:02.317695 | orchestrator | Thursday 05 February 2026 02:10:53 +0000 (0:00:01.766) 0:03:08.570 ***** 2026-02-05 02:11:02.317703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 02:11:02.317713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 02:11:02.317722 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:02.317729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 02:11:02.317736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 02:11:02.317743 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:02.317789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 02:11:02.317797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 02:11:02.317811 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:02.317816 | orchestrator | 2026-02-05 02:11:02.317821 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-05 02:11:02.317825 | orchestrator | Thursday 05 February 2026 02:10:54 +0000 (0:00:01.330) 0:03:09.900 ***** 2026-02-05 02:11:02.317830 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:02.317834 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:02.317839 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:02.317843 | orchestrator | 2026-02-05 02:11:02.317848 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-05 02:11:02.317852 | orchestrator | Thursday 05 February 2026 02:10:56 +0000 (0:00:01.262) 0:03:11.162 ***** 2026-02-05 02:11:02.317856 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:02.317861 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:02.317865 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:02.317869 | orchestrator | 2026-02-05 
02:11:02.317873 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-05 02:11:02.317878 | orchestrator | Thursday 05 February 2026 02:10:58 +0000 (0:00:01.867) 0:03:13.030 ***** 2026-02-05 02:11:02.317882 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:11:02.317886 | orchestrator | 2026-02-05 02:11:02.317891 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-05 02:11:02.317895 | orchestrator | Thursday 05 February 2026 02:10:59 +0000 (0:00:01.298) 0:03:14.329 ***** 2026-02-05 02:11:02.317901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:02.317912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:02.317917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:02.317926 | orchestrator | 2026-02-05 02:11:02.317931 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-05 02:11:02.317941 | orchestrator | Thursday 05 February 2026 02:11:02 +0000 (0:00:02.882) 0:03:17.212 ***** 2026-02-05 02:11:12.923532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:11:12.923634 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:12.923646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:11:12.923653 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:12.923675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:11:12.923683 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:12.923690 | orchestrator | 2026-02-05 02:11:12.923697 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-05 02:11:12.923704 | orchestrator | Thursday 05 February 2026 02:11:02 +0000 (0:00:00.510) 0:03:17.722 ***** 2026-02-05 02:11:12.923712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 02:11:12.923743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 02:11:12.923752 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:12.923758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 
02:11:12.923764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 02:11:12.923770 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:12.923792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 02:11:12.923799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 02:11:12.923804 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:12.923811 | orchestrator | 2026-02-05 02:11:12.923817 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-05 02:11:12.923823 | orchestrator | Thursday 05 February 2026 02:11:03 +0000 (0:00:00.965) 0:03:18.688 ***** 2026-02-05 02:11:12.923828 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:12.923834 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:12.923839 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:12.923845 | orchestrator | 2026-02-05 02:11:12.923850 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-05 02:11:12.923856 | orchestrator | Thursday 05 February 2026 02:11:05 +0000 (0:00:01.390) 0:03:20.079 ***** 2026-02-05 02:11:12.923863 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:12.923869 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:12.923874 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:12.923879 | orchestrator | 2026-02-05 02:11:12.923886 | orchestrator 
| TASK [include_role : nova] ***************************************************** 2026-02-05 02:11:12.923891 | orchestrator | Thursday 05 February 2026 02:11:07 +0000 (0:00:01.993) 0:03:22.073 ***** 2026-02-05 02:11:12.923896 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:11:12.923902 | orchestrator | 2026-02-05 02:11:12.923908 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-05 02:11:12.923914 | orchestrator | Thursday 05 February 2026 02:11:08 +0000 (0:00:01.227) 0:03:23.300 ***** 2026-02-05 02:11:12.923924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:12.923944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:11:12.923951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:11:12.923964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:13.569202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:11:13.569273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:11:13.569402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:13.569419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:11:13.569427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:11:13.569433 | orchestrator | 2026-02-05 02:11:13.569439 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-05 02:11:13.569444 | orchestrator | Thursday 05 February 2026 02:11:12 +0000 (0:00:04.516) 0:03:27.816 ***** 2026-02-05 02:11:13.569463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:11:13.569474 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:11:13.569482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:11:13.569486 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:13.569492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:11:13.569499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:11:24.578227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:11:24.578364 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:24.578395 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:11:24.578417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:11:24.578422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:11:24.578426 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:24.578430 | orchestrator | 2026-02-05 02:11:24.578435 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-05 02:11:24.578441 | orchestrator | Thursday 05 February 2026 02:11:13 +0000 (0:00:00.651) 0:03:28.468 ***** 2026-02-05 02:11:24.578446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578486 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:24.578490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578506 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:24.578510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 02:11:24.578529 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:24.578533 | orchestrator | 2026-02-05 02:11:24.578537 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-05 02:11:24.578541 | orchestrator | Thursday 05 February 2026 02:11:14 +0000 (0:00:00.872) 0:03:29.340 ***** 2026-02-05 02:11:24.578545 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:24.578549 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:24.578553 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:24.578557 | orchestrator | 2026-02-05 02:11:24.578561 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-05 02:11:24.578565 | orchestrator | Thursday 05 February 2026 02:11:16 +0000 (0:00:01.625) 0:03:30.966 ***** 2026-02-05 02:11:24.578569 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:24.578573 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:24.578577 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:24.578581 | orchestrator | 2026-02-05 02:11:24.578585 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-05 02:11:24.578589 | orchestrator | Thursday 05 February 2026 02:11:18 +0000 (0:00:02.172) 0:03:33.138 ***** 2026-02-05 02:11:24.578593 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:11:24.578597 | orchestrator | 2026-02-05 02:11:24.578601 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-05 02:11:24.578604 | orchestrator | Thursday 05 February 2026 02:11:19 +0000 (0:00:01.568) 0:03:34.706 ***** 2026-02-05 02:11:24.578609 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-02-05 02:11:24.578614 | orchestrator | 2026-02-05 02:11:24.578618 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-05 02:11:24.578622 | orchestrator | Thursday 05 February 2026 02:11:20 +0000 (0:00:00.802) 0:03:35.509 ***** 2026-02-05 02:11:24.578627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 02:11:24.578640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 02:11:36.441753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 02:11:36.441849 | orchestrator | 
2026-02-05 02:11:36.441860 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-05 02:11:36.441869 | orchestrator | Thursday 05 February 2026 02:11:24 +0000 (0:00:03.965) 0:03:39.475 ***** 2026-02-05 02:11:36.441877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.441884 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:36.441906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.441913 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:36.441919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.441926 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:36.441933 | orchestrator | 2026-02-05 02:11:36.441939 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-05 02:11:36.441946 | orchestrator | Thursday 05 February 2026 02:11:25 +0000 (0:00:01.186) 0:03:40.661 ***** 2026-02-05 02:11:36.441954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 02:11:36.441964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 02:11:36.441992 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:36.441999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 02:11:36.442005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 02:11:36.442057 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:36.442065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 02:11:36.442072 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 02:11:36.442092 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:36.442098 | orchestrator | 2026-02-05 02:11:36.442104 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 02:11:36.442111 | orchestrator | Thursday 05 February 2026 02:11:27 +0000 (0:00:01.547) 0:03:42.209 ***** 2026-02-05 02:11:36.442117 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:36.442123 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:36.442130 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:36.442136 | orchestrator | 2026-02-05 02:11:36.442143 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 02:11:36.442150 | orchestrator | Thursday 05 February 2026 02:11:29 +0000 (0:00:02.611) 0:03:44.820 ***** 2026-02-05 02:11:36.442157 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:11:36.442163 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:11:36.442170 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:11:36.442176 | orchestrator | 2026-02-05 02:11:36.442182 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-05 02:11:36.442189 | orchestrator | Thursday 05 February 2026 02:11:32 +0000 (0:00:02.642) 0:03:47.463 ***** 2026-02-05 02:11:36.442196 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-05 02:11:36.442204 | orchestrator | 2026-02-05 02:11:36.442210 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-05 02:11:36.442217 | orchestrator | 
Thursday 05 February 2026 02:11:33 +0000 (0:00:01.376) 0:03:48.839 ***** 2026-02-05 02:11:36.442229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.442237 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:36.442243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.442257 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:36.442264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.442271 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
02:11:36.442278 | orchestrator | 2026-02-05 02:11:36.442285 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-05 02:11:36.442292 | orchestrator | Thursday 05 February 2026 02:11:35 +0000 (0:00:01.178) 0:03:50.017 ***** 2026-02-05 02:11:36.442385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.442393 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:36.442400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:36.442412 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:58.130515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 02:11:58.130627 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:58.130642 | orchestrator | 2026-02-05 02:11:58.130652 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-05 02:11:58.130678 | orchestrator | Thursday 05 February 2026 02:11:36 +0000 (0:00:01.320) 0:03:51.338 ***** 2026-02-05 02:11:58.130689 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:58.130705 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:58.130713 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:58.130721 | orchestrator | 2026-02-05 02:11:58.130729 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 02:11:58.130737 | orchestrator | Thursday 05 February 2026 02:11:38 +0000 (0:00:01.684) 0:03:53.023 ***** 2026-02-05 02:11:58.130745 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:11:58.130754 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:11:58.130761 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:11:58.130769 | orchestrator | 2026-02-05 02:11:58.130776 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 02:11:58.130784 | orchestrator | Thursday 05 February 2026 02:11:40 +0000 (0:00:02.347) 0:03:55.371 ***** 2026-02-05 02:11:58.130816 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:11:58.130824 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:11:58.130832 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:11:58.130839 | orchestrator | 2026-02-05 02:11:58.130861 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-05 02:11:58.130870 | orchestrator | Thursday 05 February 2026 02:11:43 +0000 (0:00:02.947) 0:03:58.318 ***** 2026-02-05 02:11:58.130879 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-05 02:11:58.130887 | orchestrator | 2026-02-05 02:11:58.130897 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-05 02:11:58.130905 | orchestrator | Thursday 05 February 2026 02:11:44 +0000 (0:00:00.780) 0:03:59.099 ***** 2026-02-05 02:11:58.130914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 02:11:58.130922 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:58.130930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 02:11:58.130938 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:58.130947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 02:11:58.130955 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:58.130963 | orchestrator | 2026-02-05 02:11:58.130973 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-05 02:11:58.130983 | orchestrator | Thursday 05 February 2026 02:11:45 +0000 (0:00:01.101) 0:04:00.200 ***** 2026-02-05 02:11:58.131011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 02:11:58.131020 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:58.131030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 02:11:58.131045 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:58.131053 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 02:11:58.131061 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:58.131069 | orchestrator | 2026-02-05 02:11:58.131082 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-05 02:11:58.131091 | orchestrator | Thursday 05 February 2026 02:11:46 +0000 (0:00:01.392) 0:04:01.593 ***** 2026-02-05 02:11:58.131099 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:58.131107 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:58.131114 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:11:58.131123 | orchestrator | 2026-02-05 02:11:58.131131 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 02:11:58.131140 | orchestrator | Thursday 05 February 2026 02:11:47 +0000 (0:00:01.130) 0:04:02.724 ***** 2026-02-05 02:11:58.131148 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:11:58.131156 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:11:58.131164 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:11:58.131171 | orchestrator | 2026-02-05 02:11:58.131178 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 02:11:58.131185 | orchestrator | Thursday 05 February 2026 02:11:50 +0000 (0:00:02.408) 0:04:05.132 ***** 2026-02-05 02:11:58.131193 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:11:58.131200 | orchestrator | ok: 
[testbed-node-1] 2026-02-05 02:11:58.131207 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:11:58.131215 | orchestrator | 2026-02-05 02:11:58.131225 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-05 02:11:58.131234 | orchestrator | Thursday 05 February 2026 02:11:53 +0000 (0:00:03.136) 0:04:08.268 ***** 2026-02-05 02:11:58.131243 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:11:58.131250 | orchestrator | 2026-02-05 02:11:58.131258 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-05 02:11:58.131265 | orchestrator | Thursday 05 February 2026 02:11:54 +0000 (0:00:01.564) 0:04:09.833 ***** 2026-02-05 02:11:58.131275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:58.131285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 02:11:58.131408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.832136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.832239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:11:58.832251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:58.832259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 02:11:58.832267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.832339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.832363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:11:58.832371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 02:11:58.832378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 02:11:58.832385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.832392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.832411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:11:58.832418 | orchestrator | 2026-02-05 02:11:58.832458 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-05 02:11:58.832466 | orchestrator | Thursday 05 February 2026 02:11:58 +0000 (0:00:03.319) 0:04:13.152 ***** 2026-02-05 02:11:58.832479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 02:11:58.967177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 02:11:58.967257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.967273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.967282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:11:58.967358 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:11:58.967369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 02:11:58.967378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 02:11:58.967408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.967418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.967426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:11:58.967439 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:11:58.967444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 02:11:58.967449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 02:11:58.967454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 02:11:58.967467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 02:12:10.332307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 02:12:10.332413 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:10.332431 | orchestrator | 2026-02-05 02:12:10.332444 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-05 02:12:10.332456 | orchestrator | Thursday 05 February 2026 02:11:58 +0000 (0:00:00.712) 0:04:13.865 ***** 2026-02-05 02:12:10.332467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 02:12:10.332513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 02:12:10.332528 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:10.332540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 02:12:10.332552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 02:12:10.332563 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:10.332574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 02:12:10.332584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 02:12:10.332596 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:10.332607 | orchestrator | 2026-02-05 02:12:10.332618 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-05 02:12:10.332629 | orchestrator | Thursday 05 February 2026 02:12:00 +0000 (0:00:01.154) 0:04:15.019 ***** 2026-02-05 02:12:10.332640 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:12:10.332651 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:12:10.332662 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:12:10.332669 | orchestrator | 2026-02-05 02:12:10.332675 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-05 02:12:10.332682 | orchestrator | Thursday 05 February 2026 02:12:01 +0000 (0:00:01.431) 0:04:16.450 ***** 2026-02-05 02:12:10.332688 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:12:10.332694 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:12:10.332701 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:12:10.332707 | orchestrator | 2026-02-05 02:12:10.332714 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-05 02:12:10.332720 | orchestrator | Thursday 05 February 2026 02:12:03 +0000 (0:00:02.122) 0:04:18.573 ***** 2026-02-05 02:12:10.332727 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:12:10.332734 | orchestrator | 2026-02-05 02:12:10.332740 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-02-05 02:12:10.332746 | orchestrator | Thursday 05 February 2026 02:12:05 +0000 (0:00:01.349) 0:04:19.922 ***** 2026-02-05 02:12:10.332767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:12:10.332794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:12:10.332809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:12:10.332817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:12:10.332828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:12:10.332842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:12:11.897056 | orchestrator | 2026-02-05 02:12:11.897145 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-05 02:12:11.897157 | orchestrator | Thursday 05 February 2026 02:12:10 +0000 (0:00:05.301) 0:04:25.223 ***** 2026-02-05 02:12:11.897167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:12:11.897180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:12:11.897189 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:11.897225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:12:11.897234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:12:11.897274 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:11.897434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:12:11.897448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:12:11.897462 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:11.897476 | orchestrator | 2026-02-05 02:12:11.897489 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-05 02:12:11.897503 | orchestrator | Thursday 05 February 2026 02:12:10 +0000 (0:00:00.636) 0:04:25.859 ***** 2026-02-05 02:12:11.897517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-05 02:12:11.897532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 02:12:11.897549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 02:12:11.897578 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:11.897601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-02-05 02:12:11.897611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 02:12:11.897620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 02:12:11.897629 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:11.897638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-05 02:12:11.897647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 02:12:11.897670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 02:12:18.090367 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:18.090464 | orchestrator | 2026-02-05 02:12:18.090476 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-05 02:12:18.090486 | orchestrator | Thursday 05 February 2026 02:12:11 +0000 (0:00:00.934) 0:04:26.794 ***** 2026-02-05 02:12:18.090494 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:18.090502 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:18.090507 | orchestrator | 
skipping: [testbed-node-2] 2026-02-05 02:12:18.090511 | orchestrator | 2026-02-05 02:12:18.090515 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-05 02:12:18.090520 | orchestrator | Thursday 05 February 2026 02:12:12 +0000 (0:00:01.052) 0:04:27.846 ***** 2026-02-05 02:12:18.090524 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:18.090528 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:18.090532 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:18.090536 | orchestrator | 2026-02-05 02:12:18.090540 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-05 02:12:18.090544 | orchestrator | Thursday 05 February 2026 02:12:13 +0000 (0:00:01.054) 0:04:28.901 ***** 2026-02-05 02:12:18.090548 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:12:18.090553 | orchestrator | 2026-02-05 02:12:18.090556 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-05 02:12:18.090560 | orchestrator | Thursday 05 February 2026 02:12:15 +0000 (0:00:01.681) 0:04:30.583 ***** 2026-02-05 02:12:18.090567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 02:12:18.090594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 02:12:18.090610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:12:18.090615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:12:18.090622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 02:12:18.090641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 02:12:18.090652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 02:12:18.090659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:12:18.090672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:12:18.090679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 02:12:18.090689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 02:12:18.090695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 02:12:18.090708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:12:19.639688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 02:12:19.639771 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 02:12:19.639802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 02:12:19.639825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 02:12:19.639832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:19.639838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:19.639857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 02:12:19.639869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 02:12:19.639875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 02:12:19.639885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 02:12:19.639892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:19.639904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 02:12:20.400914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.400995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 02:12:20.401004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.401025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.401032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 02:12:20.401038 | orchestrator |
2026-02-05 02:12:20.401047 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-02-05 02:12:20.401053 | orchestrator | Thursday 05 February 2026 02:12:19 +0000 (0:00:04.130) 0:04:34.713 *****
2026-02-05 02:12:20.401061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 02:12:20.401068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 02:12:20.401105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.401112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.401121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 02:12:20.401134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 02:12:20.401146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 02:12:20.401157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.401178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 02:12:20.871939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.872076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 02:12:20.872096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 02:12:20.872109 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:12:20.872124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.872137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.872152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 02:12:20.872214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 02:12:20.872231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 02:12:20.872252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.872265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:20.872278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 02:12:20.872361 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:12:20.872376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 02:12:20.872401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 02:12:20.872425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:22.771941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:22.772096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 02:12:22.772129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 02:12:22.772153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-05 02:12:22.772199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:22.772213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:12:22.772245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 02:12:22.772259 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:12:22.772272 | orchestrator |
2026-02-05 02:12:22.772361 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-02-05 02:12:22.772384 | orchestrator | Thursday 05 February 2026 02:12:21 +0000 (0:00:01.498) 0:04:36.212 *****
2026-02-05 02:12:22.772413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-05 02:12:22.772440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-05 02:12:22.772464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 02:12:22.772484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 02:12:22.772498 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:12:22.772512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-05 02:12:22.772535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-05 02:12:22.772548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 02:12:22.772563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 02:12:22.772576 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:12:22.772588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-05 02:12:22.772602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-05 02:12:22.772615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 02:12:22.772629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-05 02:12:22.772642 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:12:22.772655 | orchestrator |
2026-02-05 02:12:22.772668 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-05 02:12:22.772681 | orchestrator | Thursday 05 February 2026 02:12:22 +0000 (0:00:00.981) 0:04:37.194 *****
2026-02-05 02:12:22.772694 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:12:22.772715 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:12:31.376360 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:12:31.376469 | orchestrator |
2026-02-05 02:12:31.376483 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-05 02:12:31.376497 | orchestrator | Thursday 05 February 2026 02:12:22 +0000 (0:00:00.478) 0:04:37.673 *****
2026-02-05 02:12:31.376511 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:12:31.376526 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:12:31.376539 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:12:31.376552 | orchestrator |
2026-02-05 02:12:31.376565 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-05 02:12:31.376578 | orchestrator | Thursday 05 February 2026 02:12:24 +0000 (0:00:01.322) 0:04:38.995 *****
2026-02-05 02:12:31.376591 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:12:31.376605 | orchestrator |
2026-02-05 02:12:31.376619 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-05 02:12:31.376632 | orchestrator | Thursday 05 February 2026 02:12:25 +0000 (0:00:01.794) 0:04:40.789 *****
2026-02-05 02:12:31.376649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 02:12:31.376699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 02:12:31.376715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-05 02:12:31.376730 | orchestrator |
2026-02-05 02:12:31.376792 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-05 02:12:31.376809 | orchestrator | Thursday 05 February 2026 02:12:28 +0000 (0:00:02.400) 0:04:43.190 *****
2026-02-05 02:12:31.376849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 02:12:31.376881 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:31.376897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 02:12:31.376912 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:31.376927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 02:12:31.377002 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:31.377021 | orchestrator | 2026-02-05 02:12:31.377036 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-05 02:12:31.377050 | orchestrator | Thursday 05 February 2026 02:12:28 +0000 (0:00:00.417) 0:04:43.608 ***** 2026-02-05 02:12:31.377065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 02:12:31.377079 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:31.377093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 02:12:31.377106 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:31.377120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 02:12:31.377135 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:31.377149 | orchestrator | 2026-02-05 02:12:31.377164 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-05 02:12:31.377179 | orchestrator | Thursday 05 February 2026 02:12:29 +0000 (0:00:00.901) 0:04:44.510 ***** 2026-02-05 02:12:31.377193 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 02:12:31.377209 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:31.377224 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:31.377240 | orchestrator | 2026-02-05 02:12:31.377254 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-05 02:12:31.377265 | orchestrator | Thursday 05 February 2026 02:12:30 +0000 (0:00:00.452) 0:04:44.962 ***** 2026-02-05 02:12:31.377342 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:39.443597 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:39.443683 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:39.443692 | orchestrator | 2026-02-05 02:12:39.443700 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-05 02:12:39.443711 | orchestrator | Thursday 05 February 2026 02:12:31 +0000 (0:00:01.311) 0:04:46.274 ***** 2026-02-05 02:12:39.443721 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:12:39.443731 | orchestrator | 2026-02-05 02:12:39.443742 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-05 02:12:39.443752 | orchestrator | Thursday 05 February 2026 02:12:32 +0000 (0:00:01.474) 0:04:47.749 ***** 2026-02-05 02:12:39.443780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 02:12:39.443797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 02:12:39.443807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 02:12:39.443833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 02:12:39.443869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 02:12:39.443880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 02:12:39.443890 | orchestrator | 2026-02-05 02:12:39.443900 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-05 02:12:39.443910 | orchestrator | Thursday 05 February 2026 02:12:38 +0000 (0:00:05.969) 0:04:53.719 ***** 2026-02-05 02:12:39.443919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 02:12:39.443930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 02:12:39.443953 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:45.295939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 02:12:45.296037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 02:12:45.296050 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:45.296057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 02:12:45.296062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 02:12:45.296082 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:45.296087 | orchestrator | 2026-02-05 02:12:45.296092 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-05 02:12:45.296097 | orchestrator | Thursday 05 February 2026 02:12:39 +0000 (0:00:00.619) 0:04:54.339 ***** 2026-02-05 02:12:45.296116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296142 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:45.296147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 
02:12:45.296163 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:45.296167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 02:12:45.296183 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:45.296187 | orchestrator | 2026-02-05 02:12:45.296194 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-05 02:12:45.296198 | orchestrator | Thursday 05 February 2026 02:12:40 +0000 (0:00:00.907) 0:04:55.246 ***** 2026-02-05 02:12:45.296203 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:12:45.296207 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:12:45.296211 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:12:45.296215 | orchestrator | 2026-02-05 02:12:45.296218 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-05 02:12:45.296222 | orchestrator | Thursday 05 February 2026 02:12:41 +0000 (0:00:01.613) 0:04:56.860 ***** 2026-02-05 02:12:45.296226 | orchestrator | 
changed: [testbed-node-0] 2026-02-05 02:12:45.296230 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:12:45.296234 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:12:45.296238 | orchestrator | 2026-02-05 02:12:45.296242 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-05 02:12:45.296246 | orchestrator | Thursday 05 February 2026 02:12:44 +0000 (0:00:02.117) 0:04:58.977 ***** 2026-02-05 02:12:45.296251 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:45.296258 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:45.296264 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:45.296270 | orchestrator | 2026-02-05 02:12:45.296329 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-05 02:12:45.296335 | orchestrator | Thursday 05 February 2026 02:12:44 +0000 (0:00:00.605) 0:04:59.582 ***** 2026-02-05 02:12:45.296357 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:45.296364 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:45.296371 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:12:45.296377 | orchestrator | 2026-02-05 02:12:45.296382 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-05 02:12:45.296389 | orchestrator | Thursday 05 February 2026 02:12:44 +0000 (0:00:00.312) 0:04:59.895 ***** 2026-02-05 02:12:45.296396 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:12:45.296400 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:12:45.296409 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:13:29.010686 | orchestrator | 2026-02-05 02:13:29.010765 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-05 02:13:29.010777 | orchestrator | Thursday 05 February 2026 02:12:45 +0000 (0:00:00.303) 0:05:00.198 ***** 2026-02-05 02:13:29.010785 | orchestrator | 
skipping: [testbed-node-0] 2026-02-05 02:13:29.010803 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:13:29.010811 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:13:29.010817 | orchestrator | 2026-02-05 02:13:29.010824 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-05 02:13:29.010830 | orchestrator | Thursday 05 February 2026 02:12:45 +0000 (0:00:00.333) 0:05:00.532 ***** 2026-02-05 02:13:29.010837 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:13:29.010843 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:13:29.010847 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:13:29.010851 | orchestrator | 2026-02-05 02:13:29.010855 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-05 02:13:29.010871 | orchestrator | Thursday 05 February 2026 02:12:46 +0000 (0:00:00.626) 0:05:01.158 ***** 2026-02-05 02:13:29.010876 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:13:29.010880 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:13:29.010884 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:13:29.010890 | orchestrator | 2026-02-05 02:13:29.010897 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-05 02:13:29.010903 | orchestrator | Thursday 05 February 2026 02:12:46 +0000 (0:00:00.581) 0:05:01.740 ***** 2026-02-05 02:13:29.010909 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:13:29.010916 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:13:29.010923 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:13:29.010929 | orchestrator | 2026-02-05 02:13:29.010935 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-05 02:13:29.010961 | orchestrator | Thursday 05 February 2026 02:12:47 +0000 (0:00:00.688) 0:05:02.429 ***** 2026-02-05 02:13:29.010966 | orchestrator | ok: [testbed-node-0] 
2026-02-05 02:13:29.010970 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:13:29.010974 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:13:29.010978 | orchestrator | 2026-02-05 02:13:29.010981 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-05 02:13:29.010985 | orchestrator | Thursday 05 February 2026 02:12:47 +0000 (0:00:00.356) 0:05:02.785 ***** 2026-02-05 02:13:29.010989 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:13:29.010993 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:13:29.010996 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:13:29.011000 | orchestrator | 2026-02-05 02:13:29.011004 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-05 02:13:29.011008 | orchestrator | Thursday 05 February 2026 02:12:49 +0000 (0:00:01.249) 0:05:04.034 ***** 2026-02-05 02:13:29.011012 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:13:29.011015 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:13:29.011019 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:13:29.011023 | orchestrator | 2026-02-05 02:13:29.011027 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-05 02:13:29.011030 | orchestrator | Thursday 05 February 2026 02:12:50 +0000 (0:00:00.898) 0:05:04.933 ***** 2026-02-05 02:13:29.011034 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:13:29.011038 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:13:29.011042 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:13:29.011046 | orchestrator | 2026-02-05 02:13:29.011049 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-05 02:13:29.011053 | orchestrator | Thursday 05 February 2026 02:12:50 +0000 (0:00:00.913) 0:05:05.847 ***** 2026-02-05 02:13:29.011057 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:13:29.011061 | orchestrator | changed: [testbed-node-2] 
2026-02-05 02:13:29.011065 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:13:29.011068 | orchestrator | 
2026-02-05 02:13:29.011072 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-05 02:13:29.011076 | orchestrator | Thursday 05 February 2026 02:13:00 +0000 (0:00:09.191) 0:05:15.039 *****
2026-02-05 02:13:29.011080 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:13:29.011087 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:13:29.011093 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:13:29.011099 | orchestrator | 
2026-02-05 02:13:29.011105 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-05 02:13:29.011111 | orchestrator | Thursday 05 February 2026 02:13:01 +0000 (0:00:01.141) 0:05:16.180 *****
2026-02-05 02:13:29.011118 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:13:29.011123 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:13:29.011127 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:13:29.011131 | orchestrator | 
2026-02-05 02:13:29.011135 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-05 02:13:29.011139 | orchestrator | Thursday 05 February 2026 02:13:14 +0000 (0:00:12.818) 0:05:28.998 *****
2026-02-05 02:13:29.011143 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:13:29.011146 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:13:29.011150 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:13:29.011154 | orchestrator | 
2026-02-05 02:13:29.011158 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-05 02:13:29.011162 | orchestrator | Thursday 05 February 2026 02:13:14 +0000 (0:00:00.733) 0:05:29.732 *****
2026-02-05 02:13:29.011165 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:13:29.011169 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:13:29.011173 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:13:29.011177 | orchestrator | 
2026-02-05 02:13:29.011180 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-05 02:13:29.011184 | orchestrator | Thursday 05 February 2026 02:13:23 +0000 (0:00:08.850) 0:05:38.582 *****
2026-02-05 02:13:29.011194 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:13:29.011198 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:13:29.011202 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:13:29.011206 | orchestrator | 
2026-02-05 02:13:29.011210 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-05 02:13:29.011215 | orchestrator | Thursday 05 February 2026 02:13:24 +0000 (0:00:00.647) 0:05:39.230 *****
2026-02-05 02:13:29.011219 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:13:29.011224 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:13:29.011228 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:13:29.011233 | orchestrator | 
2026-02-05 02:13:29.011249 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-05 02:13:29.011254 | orchestrator | Thursday 05 February 2026 02:13:24 +0000 (0:00:00.353) 0:05:39.583 *****
2026-02-05 02:13:29.011280 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:13:29.011285 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:13:29.011290 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:13:29.011294 | orchestrator | 
2026-02-05 02:13:29.011299 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-05 02:13:29.011303 | orchestrator | Thursday 05 February 2026 02:13:24 +0000 (0:00:00.321) 0:05:39.905 *****
2026-02-05 02:13:29.011308 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:13:29.011312 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:13:29.011317 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:13:29.011321 | orchestrator | 
2026-02-05 02:13:29.011325 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-05 02:13:29.011330 | orchestrator | Thursday 05 February 2026 02:13:25 +0000 (0:00:00.343) 0:05:40.248 *****
2026-02-05 02:13:29.011334 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:13:29.011343 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:13:29.011348 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:13:29.011352 | orchestrator | 
2026-02-05 02:13:29.011356 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-05 02:13:29.011361 | orchestrator | Thursday 05 February 2026 02:13:25 +0000 (0:00:00.359) 0:05:40.607 *****
2026-02-05 02:13:29.011365 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:13:29.011370 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:13:29.011374 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:13:29.011379 | orchestrator | 
2026-02-05 02:13:29.011383 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-05 02:13:29.011388 | orchestrator | Thursday 05 February 2026 02:13:26 +0000 (0:00:00.658) 0:05:41.266 *****
2026-02-05 02:13:29.011393 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:13:29.011397 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:13:29.011402 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:13:29.011406 | orchestrator | 
2026-02-05 02:13:29.011411 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-05 02:13:29.011415 | orchestrator | Thursday 05 February 2026 02:13:27 +0000 (0:00:01.023) 0:05:42.290 *****
2026-02-05 02:13:29.011418 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:13:29.011422 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:13:29.011426 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:13:29.011429 | orchestrator | 
2026-02-05 02:13:29.011433 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:13:29.011438 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 02:13:29.011444 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 02:13:29.011447 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 02:13:29.011451 | orchestrator | 
2026-02-05 02:13:29.011459 | orchestrator | 
2026-02-05 02:13:29.011463 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:13:29.011467 | orchestrator | Thursday 05 February 2026 02:13:28 +0000 (0:00:00.829) 0:05:43.120 *****
2026-02-05 02:13:29.011470 | orchestrator | ===============================================================================
2026-02-05 02:13:29.011474 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.82s
2026-02-05 02:13:29.011478 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.19s
2026-02-05 02:13:29.011482 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.85s
2026-02-05 02:13:29.011486 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.97s
2026-02-05 02:13:29.011489 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.30s
2026-02-05 02:13:29.011493 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.52s
2026-02-05 02:13:29.011498 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.28s
2026-02-05 02:13:29.011504 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.13s
2026-02-05 02:13:29.011510 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.03s
2026-02-05 02:13:29.011515 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.97s
2026-02-05 02:13:29.011525 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.67s
2026-02-05 02:13:29.011533 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.66s
2026-02-05 02:13:29.011539 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.43s
2026-02-05 02:13:29.011544 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.33s
2026-02-05 02:13:29.011550 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.32s
2026-02-05 02:13:29.011556 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.31s
2026-02-05 02:13:29.011562 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.25s
2026-02-05 02:13:29.011568 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.22s
2026-02-05 02:13:29.011574 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.21s
2026-02-05 02:13:29.011580 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.19s
2026-02-05 02:13:31.331178 | orchestrator | 2026-02-05 02:13:31 | INFO  | Task c22aa7a3-b69c-4bb4-81c7-b6c5fc53a5a8 (opensearch) was prepared for execution.
2026-02-05 02:13:31.331317 | orchestrator | 2026-02-05 02:13:31 | INFO  | It takes a moment until task c22aa7a3-b69c-4bb4-81c7-b6c5fc53a5a8 (opensearch) has been started and output is visible here.
2026-02-05 02:13:42.938659 | orchestrator | 2026-02-05 02:13:42.938743 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 02:13:42.938752 | orchestrator | 2026-02-05 02:13:42.938759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 02:13:42.938765 | orchestrator | Thursday 05 February 2026 02:13:35 +0000 (0:00:00.260) 0:00:00.260 ***** 2026-02-05 02:13:42.938771 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:13:42.938777 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:13:42.938783 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:13:42.938788 | orchestrator | 2026-02-05 02:13:42.938794 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 02:13:42.938799 | orchestrator | Thursday 05 February 2026 02:13:35 +0000 (0:00:00.297) 0:00:00.558 ***** 2026-02-05 02:13:42.938818 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-05 02:13:42.938824 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-05 02:13:42.938829 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-05 02:13:42.938835 | orchestrator | 2026-02-05 02:13:42.938840 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-05 02:13:42.938863 | orchestrator | 2026-02-05 02:13:42.938869 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 02:13:42.938874 | orchestrator | Thursday 05 February 2026 02:13:36 +0000 (0:00:00.426) 0:00:00.985 ***** 2026-02-05 02:13:42.938881 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:13:42.938887 | orchestrator | 2026-02-05 02:13:42.938892 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-02-05 02:13:42.938897 | orchestrator | Thursday 05 February 2026 02:13:36 +0000 (0:00:00.466) 0:00:01.451 ***** 2026-02-05 02:13:42.938903 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 02:13:42.938908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 02:13:42.938914 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 02:13:42.938920 | orchestrator | 2026-02-05 02:13:42.938925 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-05 02:13:42.938930 | orchestrator | Thursday 05 February 2026 02:13:38 +0000 (0:00:01.694) 0:00:03.146 ***** 2026-02-05 02:13:42.938937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:42.938948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:42.938965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:42.938977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:42.938989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:42.938996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:42.939002 | orchestrator | 2026-02-05 02:13:42.939008 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 02:13:42.939013 | orchestrator | Thursday 05 February 2026 02:13:40 +0000 (0:00:01.665) 0:00:04.812 ***** 2026-02-05 02:13:42.939019 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:13:42.939024 | orchestrator | 2026-02-05 02:13:42.939029 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-05 02:13:42.939035 | orchestrator | Thursday 05 February 2026 02:13:40 +0000 (0:00:00.513) 0:00:05.325 ***** 2026-02-05 02:13:42.939048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:43.727360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:43.727441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:43.727452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:43.727462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:43.727514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:43.727523 | orchestrator | 2026-02-05 02:13:43.727531 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-05 02:13:43.727539 | orchestrator | Thursday 05 February 2026 02:13:42 +0000 (0:00:02.397) 0:00:07.723 ***** 2026-02-05 02:13:43.727546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:13:43.727553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:13:43.727560 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:13:43.727568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:13:43.727593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:13:44.759594 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:13:44.759689 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:13:44.759706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:13:44.759718 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 02:13:44.759727 | orchestrator | 2026-02-05 02:13:44.759737 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-05 02:13:44.759748 | orchestrator | Thursday 05 February 2026 02:13:43 +0000 (0:00:00.788) 0:00:08.511 ***** 2026-02-05 02:13:44.759782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:13:44.759808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:13:44.759836 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:13:44.759847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:13:44.759857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:13:44.759867 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:13:44.759885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 02:13:44.759910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 02:13:44.759918 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:13:44.759924 | orchestrator | 2026-02-05 02:13:44.759930 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-05 02:13:44.759965 | orchestrator | Thursday 05 February 2026 02:13:44 +0000 (0:00:01.025) 0:00:09.537 ***** 2026-02-05 02:13:53.136820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:53.136919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:53.136932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:53.137000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:53.137028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:53.137038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:13:53.137054 | orchestrator | 2026-02-05 02:13:53.137063 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-05 02:13:53.137072 | orchestrator | Thursday 05 February 2026 02:13:47 +0000 (0:00:02.365) 0:00:11.902 ***** 2026-02-05 02:13:53.137080 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:13:53.137089 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:13:53.137096 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:13:53.137103 | orchestrator | 2026-02-05 02:13:53.137111 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-05 02:13:53.137118 | orchestrator | Thursday 05 February 2026 02:13:49 +0000 (0:00:02.389) 0:00:14.292 ***** 2026-02-05 02:13:53.137126 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:13:53.137133 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:13:53.137140 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:13:53.137147 | orchestrator | 2026-02-05 02:13:53.137155 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-05 
02:13:53.137162 | orchestrator | Thursday 05 February 2026 02:13:51 +0000 (0:00:01.761) 0:00:16.054 ***** 2026-02-05 02:13:53.137170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:53.137183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:13:53.137198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 02:16:11.646413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:16:11.646521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 02:16:11.646545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 02:16:11.646552 | orchestrator |
2026-02-05 02:16:11.646560 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-05 02:16:11.646566 | orchestrator | Thursday 05 February 2026 02:13:53 +0000 (0:00:01.866) 0:00:17.920 *****
2026-02-05 02:16:11.646572 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:16:11.646579 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:16:11.646585 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:16:11.646590 | orchestrator |
2026-02-05 02:16:11.646596 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 02:16:11.646602 | orchestrator | Thursday 05 February 2026 02:13:53 +0000 (0:00:00.291) 0:00:18.211 *****
2026-02-05 02:16:11.646607 | orchestrator |
2026-02-05 02:16:11.646613 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 02:16:11.646618 | orchestrator | Thursday 05 February 2026 02:13:53 +0000 (0:00:00.066) 0:00:18.278 *****
2026-02-05 02:16:11.646623 | orchestrator |
2026-02-05 02:16:11.646629 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 02:16:11.646639 | orchestrator | Thursday 05 February 2026 02:13:53 +0000 (0:00:00.060) 0:00:18.338 *****
2026-02-05 02:16:11.646644 | orchestrator |
2026-02-05 02:16:11.646650 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-05 02:16:11.646667 | orchestrator | Thursday 05 February 2026 02:13:53 +0000 (0:00:00.066) 0:00:18.405 *****
2026-02-05 02:16:11.646673 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:16:11.646678 | orchestrator |
2026-02-05 02:16:11.646684 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-05 02:16:11.646689 | orchestrator | Thursday 05 February 2026 02:13:53 +0000 (0:00:00.203) 0:00:18.608 *****
2026-02-05 02:16:11.646694 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:16:11.646700 | orchestrator |
2026-02-05 02:16:11.646705 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-02-05 02:16:11.646710 | orchestrator | Thursday 05 February 2026 02:13:54 +0000 (0:00:00.210) 0:00:18.819 *****
2026-02-05 02:16:11.646716 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:16:11.646721 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:16:11.646726 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:16:11.646732 | orchestrator |
2026-02-05 02:16:11.646737 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-02-05 02:16:11.646742 | orchestrator | Thursday 05 February 2026 02:14:49 +0000 (0:00:55.572) 0:01:14.391 *****
2026-02-05 02:16:11.646748 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:16:11.646753 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:16:11.646758 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:16:11.646764 | orchestrator |
2026-02-05 02:16:11.646769 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-05 02:16:11.646775 | orchestrator | Thursday 05 February 2026 02:16:00 +0000 (0:01:10.444) 0:02:24.835 *****
2026-02-05 02:16:11.646780 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:16:11.646786 | orchestrator |
2026-02-05 02:16:11.646791 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-02-05 02:16:11.646797 | orchestrator | Thursday 05 February 2026 02:16:00 +0000 (0:00:00.533) 0:02:25.368 *****
2026-02-05 02:16:11.646802 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:16:11.646808 | orchestrator |
2026-02-05 02:16:11.646814 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-02-05 02:16:11.646819 | orchestrator | Thursday 05 February 2026 02:16:03 +0000 (0:00:02.674) 0:02:28.043 *****
2026-02-05 02:16:11.646825 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:16:11.646833 | orchestrator |
2026-02-05 02:16:11.646842 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-02-05 02:16:11.646851 | orchestrator | Thursday 05 February 2026 02:16:06 +0000 (0:00:02.805) 0:02:30.849 *****
2026-02-05 02:16:11.646859 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:16:11.646867 | orchestrator |
2026-02-05 02:16:11.646875 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-02-05 02:16:11.646884 | orchestrator | Thursday 05 February 2026 02:16:08 +0000 (0:00:02.914) 0:02:33.764 *****
2026-02-05 02:16:11.646892 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:16:11.646900 | orchestrator |
2026-02-05 02:16:11.646908 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:16:11.646918 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 02:16:11.646928 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 02:16:11.646944 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 02:16:11.646953 | orchestrator |
2026-02-05 02:16:11.646962 | orchestrator |
2026-02-05 02:16:11.646979 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:16:11.646988 | orchestrator | Thursday 05 February 2026 02:16:11 +0000 (0:00:02.654) 0:02:36.419 *****
2026-02-05 02:16:11.646998 | orchestrator | ===============================================================================
2026-02-05 02:16:11.647008 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.44s
2026-02-05 02:16:11.647018 | orchestrator | opensearch : Restart opensearch container ------------------------------ 55.57s
2026-02-05 02:16:11.647025 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.91s
2026-02-05 02:16:11.647031 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.81s
2026-02-05 02:16:11.647038 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s
2026-02-05 02:16:11.647044 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.65s
2026-02-05 02:16:11.647050 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.40s
2026-02-05 02:16:11.647057 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.39s
2026-02-05 02:16:11.647063 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.37s
2026-02-05 02:16:11.647069 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.87s
2026-02-05 02:16:11.647076 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.76s
2026-02-05 02:16:11.647083 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.70s
2026-02-05 02:16:11.647089 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.67s
2026-02-05 02:16:11.647096 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.03s
2026-02-05 02:16:11.647102 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.79s
2026-02-05 02:16:11.647108 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2026-02-05 02:16:11.647119 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2026-02-05 02:16:11.961291 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s
2026-02-05 02:16:11.961449 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-02-05 02:16:11.961468 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-02-05 02:16:14.226658 | orchestrator | 2026-02-05 02:16:14 | INFO  | Task 0bd5faf1-8dd6-4dcd-bb93-7aa54d33e53c (memcached) was prepared for execution.
2026-02-05 02:16:14.226736 | orchestrator | 2026-02-05 02:16:14 | INFO  | It takes a moment until task 0bd5faf1-8dd6-4dcd-bb93-7aa54d33e53c (memcached) has been started and output is visible here.
2026-02-05 02:16:30.008364 | orchestrator |
2026-02-05 02:16:30.008479 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 02:16:30.008499 | orchestrator |
2026-02-05 02:16:30.008512 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 02:16:30.008525 | orchestrator | Thursday 05 February 2026 02:16:18 +0000 (0:00:00.187) 0:00:00.187 *****
2026-02-05 02:16:30.008537 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:16:30.008551 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:16:30.008563 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:16:30.008575 | orchestrator |
2026-02-05 02:16:30.008586 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 02:16:30.008597 | orchestrator | Thursday 05 February 2026 02:16:18 +0000 (0:00:00.244) 0:00:00.432 *****
2026-02-05 02:16:30.008610 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-05 02:16:30.008623 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-05 02:16:30.008635 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-05 02:16:30.008648 | orchestrator |
2026-02-05 02:16:30.008660 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-05 02:16:30.008702 | orchestrator |
2026-02-05 02:16:30.008716 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-05 02:16:30.008729 | orchestrator | Thursday 05 February 2026 02:16:18 +0000 (0:00:00.329) 0:00:00.761 *****
2026-02-05 02:16:30.008741 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:16:30.008754 | orchestrator |
2026-02-05 02:16:30.008766 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-05 02:16:30.008777 | orchestrator | Thursday 05 February 2026 02:16:19 +0000 (0:00:00.436) 0:00:01.198 *****
2026-02-05 02:16:30.008788 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-05 02:16:30.008800 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-05 02:16:30.008812 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-05 02:16:30.008823 | orchestrator |
2026-02-05 02:16:30.008835 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-05 02:16:30.008847 | orchestrator | Thursday 05 February 2026 02:16:19 +0000 (0:00:00.625) 0:00:01.823 *****
2026-02-05 02:16:30.008859 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-05 02:16:30.008871 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-05 02:16:30.008882 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-05 02:16:30.008894 | orchestrator |
2026-02-05 02:16:30.008906 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-05 02:16:30.008917 | orchestrator | Thursday 05 February 2026 02:16:21 +0000 (0:00:01.496) 0:00:03.319 *****
2026-02-05 02:16:30.008943 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:16:30.008955 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:16:30.008967 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:16:30.008977 | orchestrator |
2026-02-05 02:16:30.008989 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-05 02:16:30.008999 | orchestrator | Thursday 05 February 2026 02:16:22 +0000 (0:00:01.423) 0:00:04.743 *****
2026-02-05 02:16:30.009011 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:16:30.009021 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:16:30.009033 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:16:30.009044 | orchestrator |
2026-02-05 02:16:30.009055 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:16:30.009067 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:16:30.009080 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:16:30.009092 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:16:30.009104 | orchestrator |
2026-02-05 02:16:30.009117 | orchestrator |
2026-02-05 02:16:30.009129 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:16:30.009141 | orchestrator | Thursday 05 February 2026 02:16:29 +0000 (0:00:07.000) 0:00:11.744 *****
2026-02-05 02:16:30.009153 | orchestrator | ===============================================================================
2026-02-05 02:16:30.009165 | orchestrator | memcached : Restart memcached container --------------------------------- 7.00s
2026-02-05 02:16:30.009178 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.50s
2026-02-05 02:16:30.009190 | orchestrator | memcached : Check memcached container ----------------------------------- 1.42s
2026-02-05 02:16:30.009242 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.63s
2026-02-05 02:16:30.009254 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.44s
2026-02-05 02:16:30.009266 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s
2026-02-05 02:16:30.009279 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s
2026-02-05 02:16:32.357895 | orchestrator | 2026-02-05 02:16:32 | INFO  | Task 360f38f0-7bb7-4925-8e75-28de2857f2c4 (redis) was prepared for execution.
2026-02-05 02:16:32.357979 | orchestrator | 2026-02-05 02:16:32 | INFO  | It takes a moment until task 360f38f0-7bb7-4925-8e75-28de2857f2c4 (redis) has been started and output is visible here.
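The "Check if a log retention policy exists" / "Create new log retention policy" tasks in the opensearch play above talk to OpenSearch's Index State Management (ISM) plugin, most likely via Ansible's `uri` module against the internal endpoint. As a minimal sketch of what such a check-then-create request looks like: the policy id `retention`, the `30d` age, and the `flog-*` index pattern below are hypothetical illustrations, not the values the role actually uses, and only the request is built here (no I/O is performed).

```python
# Sketch only: build a PUT request for the OpenSearch ISM policies API
# (PUT <base>/_plugins/_ism/policies/<policy_id>). Policy id, retention age,
# and index pattern are hypothetical, not taken from the role.
import json


def build_retention_policy(base_url: str, policy_id: str,
                           max_age: str, index_pattern: str):
    """Return (url, json_body) for creating a delete-after-max_age ISM policy."""
    url = f"{base_url}/_plugins/_ism/policies/{policy_id}"
    body = {
        "policy": {
            "description": f"Delete indices older than {max_age}",
            "default_state": "hot",
            "states": [
                {
                    "name": "hot",
                    "actions": [],
                    # Move the index to the delete state once it is old enough.
                    "transitions": [{
                        "state_name": "delete",
                        "conditions": {"min_index_age": max_age},
                    }],
                },
                {
                    "name": "delete",
                    "actions": [{"delete": {}}],
                    "transitions": [],
                },
            ],
            # Auto-attach the policy to newly created matching indices.
            "ism_template": [{"index_patterns": [index_pattern], "priority": 1}],
        }
    }
    return url, json.dumps(body)
```

The existence check in the log maps to a GET on the same `_plugins/_ism/policies/<policy_id>` path (404 means "create it"), and "Apply retention policy to existing indices" corresponds to the ISM `add` API for indices that predate the template.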
2026-02-05 02:16:41.212973 | orchestrator | 2026-02-05 02:16:41.213080 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 02:16:41.213096 | orchestrator | 2026-02-05 02:16:41.213107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 02:16:41.213118 | orchestrator | Thursday 05 February 2026 02:16:36 +0000 (0:00:00.256) 0:00:00.256 ***** 2026-02-05 02:16:41.213128 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:16:41.213139 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:16:41.213149 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:16:41.213158 | orchestrator | 2026-02-05 02:16:41.213168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 02:16:41.213178 | orchestrator | Thursday 05 February 2026 02:16:36 +0000 (0:00:00.287) 0:00:00.543 ***** 2026-02-05 02:16:41.213311 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-05 02:16:41.213324 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-05 02:16:41.213334 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-05 02:16:41.213344 | orchestrator | 2026-02-05 02:16:41.213354 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-05 02:16:41.213364 | orchestrator | 2026-02-05 02:16:41.213374 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-05 02:16:41.213384 | orchestrator | Thursday 05 February 2026 02:16:37 +0000 (0:00:00.410) 0:00:00.954 ***** 2026-02-05 02:16:41.213393 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:16:41.213404 | orchestrator | 2026-02-05 02:16:41.213414 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-05 
02:16:41.213423 | orchestrator | Thursday 05 February 2026 02:16:37 +0000 (0:00:00.457) 0:00:01.411 ***** 2026-02-05 02:16:41.213436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213501 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213558 | orchestrator | 2026-02-05 02:16:41.213571 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-05 02:16:41.213582 | orchestrator | Thursday 05 February 2026 02:16:38 +0000 (0:00:01.108) 0:00:02.519 ***** 2026-02-05 02:16:41.213594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:41.213763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352510 | orchestrator | 2026-02-05 02:16:45.352516 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-05 02:16:45.352521 | orchestrator | Thursday 05 February 2026 02:16:41 +0000 (0:00:02.442) 0:00:04.962 ***** 2026-02-05 02:16:45.352527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 
02:16:45.352556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352617 | orchestrator | 2026-02-05 02:16:45.352624 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-05 02:16:45.352630 | orchestrator | Thursday 05 February 2026 02:16:43 +0000 (0:00:02.393) 0:00:07.355 ***** 2026-02-05 02:16:45.352637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 02:16:45.352670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 02:16:45.352679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 02:17:01.005928 | orchestrator | 
2026-02-05 02:17:01.006084 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-05 02:17:01.006100 | orchestrator | Thursday 05 February 2026 02:16:45 +0000 (0:00:00.066) 0:00:08.905 *****
2026-02-05 02:17:01.006110 | orchestrator | 
2026-02-05 02:17:01.006119 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-05 02:17:01.006129 | orchestrator | Thursday 05 February 2026 02:16:45 +0000 (0:00:00.063) 0:00:08.971 *****
2026-02-05 02:17:01.006138 | orchestrator | 
2026-02-05 02:17:01.006147 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-05 02:17:01.006156 | orchestrator | Thursday 05 February 2026 02:16:45 +0000 (0:00:00.063) 0:00:09.034 *****
2026-02-05 02:17:01.006165 | orchestrator | 
2026-02-05 02:17:01.006174 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-05 02:17:01.006183 | orchestrator | Thursday 05 February 2026 02:16:45 +0000 (0:00:00.062) 0:00:09.097 *****
2026-02-05 02:17:01.006282 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:17:01.006349 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:17:01.006359 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:17:01.006368 | orchestrator | 
2026-02-05 02:17:01.006377 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-05 02:17:01.006393 | orchestrator | Thursday 05 February 2026 02:16:52 +0000 (0:00:07.511) 0:00:16.609 *****
2026-02-05 02:17:01.006445 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:17:01.006465 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:17:01.006480 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:17:01.006495 | orchestrator | 
2026-02-05 02:17:01.006509 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:17:01.006527 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:17:01.006544 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:17:01.006578 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:17:01.006594 | orchestrator | 
2026-02-05 02:17:01.006610 | orchestrator | 
2026-02-05 02:17:01.006622 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:17:01.006631 | orchestrator | Thursday 05 February 2026 02:17:00 +0000 (0:00:07.821) 0:00:24.430 *****
2026-02-05 02:17:01.006640 | orchestrator | ===============================================================================
2026-02-05 02:17:01.006649 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.82s
2026-02-05 02:17:01.006657 | orchestrator | redis : Restart redis container ----------------------------------------- 7.51s
2026-02-05 02:17:01.006666 | orchestrator | redis : Copying over default config.json files -------------------------- 2.44s
2026-02-05 02:17:01.006675 | orchestrator | redis : Copying over redis config files --------------------------------- 2.39s
2026-02-05 02:17:01.006683 | orchestrator | redis : Check redis containers ------------------------------------------ 1.55s
2026-02-05 02:17:01.006692 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.11s
2026-02-05 02:17:01.006700 | orchestrator | redis : include_tasks --------------------------------------------------- 0.46s
2026-02-05 02:17:01.006709 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-02-05 02:17:01.006718 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-02-05 02:17:01.006726 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s
2026-02-05 02:17:03.277339 | orchestrator | 2026-02-05 02:17:03 | INFO  | Task 4ee4f452-d720-47b6-aa33-15751c4aa668 (mariadb) was prepared for execution.
2026-02-05 02:17:03.277443 | orchestrator | 2026-02-05 02:17:03 | INFO  | It takes a moment until task 4ee4f452-d720-47b6-aa33-15751c4aa668 (mariadb) has been started and output is visible here.
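For orientation while reading the plays above: each service run opens with a "Group hosts based on configuration" play that sorts hosts into dynamic groups before the role is applied. The following is a minimal sketch of that pattern, assuming the usual kolla-ansible `group_by` conventions; the variable names, group keys, and role layout here are illustrative assumptions, not copied from the actual OSISM playbooks.

```yaml
---
# Sketch: build dynamic groups first, then target the role at the
# group that the enabled-service flag produced (e.g. enable_redis_True).
- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on Kolla action
      group_by:
        key: "kolla_action_{{ kolla_action | default('deploy') }}"

    - name: Group hosts based on enabled services
      group_by:
        key: "{{ item }}"
      with_items:
        - "enable_redis_{{ enable_redis | default(false) | bool }}"

- name: Apply role redis
  hosts: enable_redis_True
  roles:
    - role: redis   # tasks/deploy.yml is included based on kolla_action
```

The per-host `ok: [...] => (item=enable_redis_True)` lines in the log are exactly this `group_by` loop reporting which dynamic group each node joined; the "Flush handlers" tasks later in the play force the queued container-restart handlers to run before the play ends.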
2026-02-05 02:17:16.641621 | orchestrator | 
2026-02-05 02:17:16.641779 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 02:17:16.641813 | orchestrator | 
2026-02-05 02:17:16.641832 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 02:17:16.641850 | orchestrator | Thursday 05 February 2026 02:17:07 +0000 (0:00:00.166) 0:00:00.166 *****
2026-02-05 02:17:16.641878 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:17:16.641902 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:17:16.641920 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:17:16.641939 | orchestrator | 
2026-02-05 02:17:16.641978 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 02:17:16.642012 | orchestrator | Thursday 05 February 2026 02:17:07 +0000 (0:00:00.299) 0:00:00.466 *****
2026-02-05 02:17:16.642115 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-05 02:17:16.642140 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-05 02:17:16.642158 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-05 02:17:16.642176 | orchestrator | 
2026-02-05 02:17:16.642284 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-05 02:17:16.642303 | orchestrator | 
2026-02-05 02:17:16.642321 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-05 02:17:16.642465 | orchestrator | Thursday 05 February 2026 02:17:08 +0000 (0:00:00.593) 0:00:01.060 *****
2026-02-05 02:17:16.642490 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 02:17:16.642517 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 02:17:16.642541 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 02:17:16.642560 | orchestrator | 
2026-02-05 02:17:16.642590 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 02:17:16.642611 | orchestrator | Thursday 05 February 2026 02:17:08 +0000 (0:00:00.360) 0:00:01.421 ***** 2026-02-05 02:17:16.642632 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:17:16.642652 | orchestrator | 2026-02-05 02:17:16.642673 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-05 02:17:16.642693 | orchestrator | Thursday 05 February 2026 02:17:09 +0000 (0:00:00.509) 0:00:01.930 ***** 2026-02-05 02:17:16.642742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:16.642808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:16.642864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:16.642886 | orchestrator | 2026-02-05 02:17:16.642906 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-05 02:17:16.642924 | orchestrator | Thursday 05 February 2026 02:17:11 +0000 (0:00:02.484) 0:00:04.415 ***** 2026-02-05 02:17:16.642943 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:17:16.642965 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:17:16.642985 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:17:16.643005 | orchestrator | 2026-02-05 02:17:16.643017 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-05 02:17:16.643028 | orchestrator | Thursday 05 February 2026 02:17:12 +0000 (0:00:00.591) 0:00:05.006 ***** 2026-02-05 02:17:16.643039 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:17:16.643050 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:17:16.643061 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:17:16.643071 | orchestrator | 2026-02-05 02:17:16.643082 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-05 02:17:16.643093 | orchestrator | Thursday 05 February 2026 02:17:13 +0000 (0:00:01.432) 0:00:06.439 ***** 2026-02-05 02:17:16.643118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:23.890770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:23.890879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:23.890917 | orchestrator | 2026-02-05 02:17:23.890932 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-05 02:17:23.890945 | orchestrator | Thursday 05 February 2026 02:17:16 +0000 (0:00:02.992) 0:00:09.431 ***** 2026-02-05 02:17:23.890956 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:17:23.890969 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:17:23.890980 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:17:23.890996 | orchestrator | 2026-02-05 02:17:23.891015 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-05 02:17:23.891053 | orchestrator | Thursday 05 February 2026 02:17:17 +0000 (0:00:01.063) 0:00:10.494 ***** 2026-02-05 02:17:23.891073 | 
orchestrator | changed: [testbed-node-0] 2026-02-05 02:17:23.891093 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:17:23.891112 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:17:23.891131 | orchestrator | 2026-02-05 02:17:23.891150 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 02:17:23.891169 | orchestrator | Thursday 05 February 2026 02:17:21 +0000 (0:00:03.624) 0:00:14.119 ***** 2026-02-05 02:17:23.891212 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:17:23.891233 | orchestrator | 2026-02-05 02:17:23.891245 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 02:17:23.891256 | orchestrator | Thursday 05 February 2026 02:17:21 +0000 (0:00:00.491) 0:00:14.610 ***** 2026-02-05 02:17:23.891280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:23.891306 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:17:23.891330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:28.422943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:28.423081 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:17:28.423100 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:17:28.423111 | orchestrator | 2026-02-05 02:17:28.423122 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-05 02:17:28.423133 | orchestrator | Thursday 05 February 2026 02:17:23 +0000 (0:00:02.074) 0:00:16.685 ***** 2026-02-05 02:17:28.423145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:28.423157 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:17:28.423262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:28.423285 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:17:28.423297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:28.423308 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:17:28.423317 | orchestrator | 2026-02-05 02:17:28.423327 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-05 02:17:28.423337 | orchestrator | Thursday 05 February 2026 02:17:26 +0000 (0:00:02.150) 0:00:18.835 ***** 2026-02-05 02:17:28.423362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:31.123121 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:17:31.123224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:31.123234 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:17:31.123251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 02:17:31.123270 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:17:31.123274 | orchestrator | 2026-02-05 02:17:31.123279 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-05 02:17:31.123285 | orchestrator | Thursday 05 February 2026 02:17:28 +0000 (0:00:02.378) 0:00:21.214 ***** 2026-02-05 02:17:31.123299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:31.123305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:17:31.123318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 02:19:38.043463 | orchestrator | 2026-02-05 02:19:38.043571 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-05 02:19:38.043588 | orchestrator | Thursday 05 February 2026 02:17:31 +0000 (0:00:02.699) 0:00:23.913 ***** 2026-02-05 02:19:38.043601 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:19:38.043613 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:19:38.043624 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:19:38.043634 | orchestrator | 2026-02-05 02:19:38.043644 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-05 02:19:38.043655 | orchestrator | Thursday 05 February 2026 02:17:31 +0000 (0:00:00.876) 0:00:24.790 ***** 2026-02-05 02:19:38.043665 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:19:38.043675 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:19:38.043686 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:19:38.043695 | orchestrator | 2026-02-05 02:19:38.043706 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-02-05 02:19:38.043715 | orchestrator | Thursday 05 February 2026 02:17:32 +0000 (0:00:00.478) 0:00:25.269 ***** 2026-02-05 02:19:38.043725 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:19:38.043736 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:19:38.043746 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:19:38.043756 | orchestrator | 2026-02-05 02:19:38.043765 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-05 02:19:38.043775 | orchestrator | Thursday 05 February 2026 02:17:32 +0000 (0:00:00.308) 0:00:25.577 ***** 2026-02-05 02:19:38.043786 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-05 02:19:38.043798 | orchestrator | ...ignoring 2026-02-05 02:19:38.043809 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-05 02:19:38.043819 | orchestrator | ...ignoring 2026-02-05 02:19:38.043829 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-05 02:19:38.043839 | orchestrator | ...ignoring 2026-02-05 02:19:38.043872 | orchestrator | 2026-02-05 02:19:38.043882 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-05 02:19:38.043892 | orchestrator | Thursday 05 February 2026 02:17:43 +0000 (0:00:10.888) 0:00:36.466 ***** 2026-02-05 02:19:38.043901 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:19:38.043912 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:19:38.043919 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:19:38.043925 | orchestrator | 2026-02-05 02:19:38.043931 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-05 02:19:38.043937 | orchestrator | Thursday 05 February 2026 02:17:44 +0000 (0:00:00.455) 0:00:36.921 ***** 2026-02-05 02:19:38.043942 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:19:38.043948 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:19:38.043954 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:19:38.043961 | orchestrator | 2026-02-05 02:19:38.043968 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-05 02:19:38.043975 | orchestrator | Thursday 05 February 2026 02:17:44 +0000 (0:00:00.604) 0:00:37.526 ***** 2026-02-05 02:19:38.043982 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:19:38.043989 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:19:38.043996 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:19:38.044003 | orchestrator | 2026-02-05 02:19:38.044023 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-05 02:19:38.044031 | orchestrator | Thursday 05 February 2026 02:17:45 +0000 (0:00:00.411) 0:00:37.938 ***** 2026-02-05 02:19:38.044038 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 02:19:38.044045 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:19:38.044052 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:19:38.044059 | orchestrator | 2026-02-05 02:19:38.044066 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-05 02:19:38.044073 | orchestrator | Thursday 05 February 2026 02:17:45 +0000 (0:00:00.427) 0:00:38.366 ***** 2026-02-05 02:19:38.044080 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:19:38.044087 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:19:38.044094 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:19:38.044101 | orchestrator | 2026-02-05 02:19:38.044108 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-05 02:19:38.044116 | orchestrator | Thursday 05 February 2026 02:17:45 +0000 (0:00:00.394) 0:00:38.760 ***** 2026-02-05 02:19:38.044123 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:19:38.044130 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:19:38.044137 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:19:38.044144 | orchestrator | 2026-02-05 02:19:38.044200 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 02:19:38.044210 | orchestrator | Thursday 05 February 2026 02:17:46 +0000 (0:00:00.399) 0:00:39.159 ***** 2026-02-05 02:19:38.044217 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:19:38.044224 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:19:38.044231 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-05 02:19:38.044238 | orchestrator | 2026-02-05 02:19:38.044246 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-05 02:19:38.044253 | orchestrator | Thursday 05 February 2026 02:17:46 +0000 (0:00:00.519) 0:00:39.679 ***** 2026-02-05 
02:19:38.044260 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:38.044267 | orchestrator |
2026-02-05 02:19:38.044274 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-05 02:19:38.044280 | orchestrator | Thursday 05 February 2026 02:17:56 +0000 (0:00:09.979) 0:00:49.658 *****
2026-02-05 02:19:38.044287 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:19:38.044294 | orchestrator |
2026-02-05 02:19:38.044301 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-05 02:19:38.044309 | orchestrator | Thursday 05 February 2026 02:17:57 +0000 (0:00:00.149) 0:00:49.808 *****
2026-02-05 02:19:38.044316 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:19:38.044344 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:38.044352 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:38.044359 | orchestrator |
2026-02-05 02:19:38.044366 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-05 02:19:38.044374 | orchestrator | Thursday 05 February 2026 02:17:57 +0000 (0:00:00.932) 0:00:50.740 *****
2026-02-05 02:19:38.044381 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:38.044386 | orchestrator |
2026-02-05 02:19:38.044392 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-05 02:19:38.044398 | orchestrator | Thursday 05 February 2026 02:18:05 +0000 (0:00:07.344) 0:00:58.085 *****
2026-02-05 02:19:38.044404 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:19:38.044410 | orchestrator |
2026-02-05 02:19:38.044416 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-05 02:19:38.044422 | orchestrator | Thursday 05 February 2026 02:18:06 +0000 (0:00:01.640) 0:00:59.725 *****
2026-02-05 02:19:38.044427 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:19:38.044433 | orchestrator |
2026-02-05 02:19:38.044439 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-05 02:19:38.044445 | orchestrator | Thursday 05 February 2026 02:18:09 +0000 (0:00:02.291) 0:01:02.016 *****
2026-02-05 02:19:38.044451 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:38.044457 | orchestrator |
2026-02-05 02:19:38.044463 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-05 02:19:38.044468 | orchestrator | Thursday 05 February 2026 02:18:09 +0000 (0:00:00.128) 0:01:02.145 *****
2026-02-05 02:19:38.044474 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:19:38.044480 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:38.044486 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:38.044492 | orchestrator |
2026-02-05 02:19:38.044498 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-05 02:19:38.044504 | orchestrator | Thursday 05 February 2026 02:18:09 +0000 (0:00:00.307) 0:01:02.452 *****
2026-02-05 02:19:38.044509 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:19:38.044515 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-05 02:19:38.044521 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:19:38.044527 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:19:38.044533 | orchestrator |
2026-02-05 02:19:38.044539 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-05 02:19:38.044545 | orchestrator | skipping: no hosts matched
2026-02-05 02:19:38.044551 | orchestrator |
2026-02-05 02:19:38.044556 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-05 02:19:38.044562 | orchestrator |
2026-02-05 02:19:38.044568 | orchestrator | TASK [mariadb : Restart MariaDB container]
*************************************
2026-02-05 02:19:38.044574 | orchestrator | Thursday 05 February 2026 02:18:09 +0000 (0:00:00.317) 0:01:02.770 *****
2026-02-05 02:19:38.044580 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:19:38.044586 | orchestrator |
2026-02-05 02:19:38.044591 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-05 02:19:38.044597 | orchestrator | Thursday 05 February 2026 02:18:26 +0000 (0:00:16.536) 0:01:19.306 *****
2026-02-05 02:19:38.044603 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:19:38.044609 | orchestrator |
2026-02-05 02:19:38.044615 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-05 02:19:38.044621 | orchestrator | Thursday 05 February 2026 02:18:42 +0000 (0:00:15.821) 0:01:35.128 *****
2026-02-05 02:19:38.044626 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:19:38.044632 | orchestrator |
2026-02-05 02:19:38.044641 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-05 02:19:38.044647 | orchestrator |
2026-02-05 02:19:38.044658 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-05 02:19:38.044664 | orchestrator | Thursday 05 February 2026 02:18:44 +0000 (0:00:02.172) 0:01:37.300 *****
2026-02-05 02:19:38.044675 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:19:38.044681 | orchestrator |
2026-02-05 02:19:38.044687 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-05 02:19:38.044693 | orchestrator | Thursday 05 February 2026 02:19:00 +0000 (0:00:15.528) 0:01:52.828 *****
2026-02-05 02:19:38.044698 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:19:38.044704 | orchestrator |
2026-02-05 02:19:38.044710 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-05 02:19:38.044731 | orchestrator | Thursday 05 February 2026 02:19:16 +0000 (0:00:16.570) 0:02:09.399 *****
2026-02-05 02:19:38.044737 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:19:38.044751 | orchestrator |
2026-02-05 02:19:38.044757 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-05 02:19:38.044763 | orchestrator |
2026-02-05 02:19:38.044768 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-05 02:19:38.044774 | orchestrator | Thursday 05 February 2026 02:19:18 +0000 (0:00:02.193) 0:02:11.593 *****
2026-02-05 02:19:38.044780 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:38.044786 | orchestrator |
2026-02-05 02:19:38.044792 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-05 02:19:38.044797 | orchestrator | Thursday 05 February 2026 02:19:30 +0000 (0:00:11.489) 0:02:23.082 *****
2026-02-05 02:19:38.044803 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:19:38.044809 | orchestrator |
2026-02-05 02:19:38.044815 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-05 02:19:38.044821 | orchestrator | Thursday 05 February 2026 02:19:34 +0000 (0:00:04.625) 0:02:27.708 *****
2026-02-05 02:19:38.044826 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:19:38.044832 | orchestrator |
2026-02-05 02:19:38.044838 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-05 02:19:38.044844 | orchestrator |
2026-02-05 02:19:38.044850 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-05 02:19:38.044856 | orchestrator | Thursday 05 February 2026 02:19:37 +0000 (0:00:02.464) 0:02:30.173 *****
2026-02-05 02:19:38.044862 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:19:38.044867 | orchestrator |
2026-02-05 02:19:38.044873 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-05 02:19:38.044883 | orchestrator | Thursday 05 February 2026 02:19:38 +0000 (0:00:00.656) 0:02:30.829 *****
2026-02-05 02:19:51.551940 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:51.552018 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:51.552025 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:51.552030 | orchestrator |
2026-02-05 02:19:51.552035 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-05 02:19:51.552040 | orchestrator | Thursday 05 February 2026 02:19:40 +0000 (0:00:02.689) 0:02:33.519 *****
2026-02-05 02:19:51.552044 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:51.552049 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:51.552052 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:51.552056 | orchestrator |
2026-02-05 02:19:51.552060 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-05 02:19:51.552064 | orchestrator | Thursday 05 February 2026 02:19:43 +0000 (0:00:02.393) 0:02:35.912 *****
2026-02-05 02:19:51.552068 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:51.552072 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:51.552076 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:51.552080 | orchestrator |
2026-02-05 02:19:51.552084 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-05 02:19:51.552087 | orchestrator | Thursday 05 February 2026 02:19:45 +0000 (0:00:02.405) 0:02:38.317 *****
2026-02-05 02:19:51.552091 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:51.552095 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:51.552099 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:19:51.552103 | orchestrator |
| orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-05 02:19:51.552129 | orchestrator | Thursday 05 February 2026 02:19:48 +0000 (0:00:02.590) 0:02:40.908 *****
2026-02-05 02:19:51.552133 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:19:51.552138 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:19:51.552141 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:19:51.552145 | orchestrator |
2026-02-05 02:19:51.552198 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-05 02:19:51.552206 | orchestrator | Thursday 05 February 2026 02:19:50 +0000 (0:00:02.832) 0:02:43.741 *****
2026-02-05 02:19:51.552212 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:19:51.552217 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:19:51.552225 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:19:51.552229 | orchestrator |
2026-02-05 02:19:51.552239 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:19:51.552244 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-05 02:19:51.552250 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-05 02:19:51.552254 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-05 02:19:51.552258 | orchestrator |
2026-02-05 02:19:51.552262 | orchestrator |
2026-02-05 02:19:51.552271 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:19:51.552275 | orchestrator | Thursday 05 February 2026 02:19:51 +0000 (0:00:00.196) 0:02:43.937 *****
2026-02-05 02:19:51.552279 | orchestrator | ===============================================================================
2026-02-05 02:19:51.552295 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.39s
2026-02-05 02:19:51.552299 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.07s
2026-02-05 02:19:51.552303 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.49s
2026-02-05 02:19:51.552306 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s
2026-02-05 02:19:51.552310 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.98s
2026-02-05 02:19:51.552314 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.34s
2026-02-05 02:19:51.552318 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.63s
2026-02-05 02:19:51.552322 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.37s
2026-02-05 02:19:51.552325 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.62s
2026-02-05 02:19:51.552329 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.99s
2026-02-05 02:19:51.552333 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.83s
2026-02-05 02:19:51.552337 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.70s
2026-02-05 02:19:51.552340 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.69s
2026-02-05 02:19:51.552344 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.59s
2026-02-05 02:19:51.552348 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.48s
2026-02-05 02:19:51.552352 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.46s
2026-02-05 02:19:51.552355 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.41s
2026-02-05 02:19:51.552359 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.39s
2026-02-05 02:19:51.552363 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.38s
2026-02-05 02:19:51.552367 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.29s
2026-02-05 02:19:53.839688 | orchestrator | 2026-02-05 02:19:53 | INFO  | Task 8a1500bf-eb75-48ec-b4ae-119f07e84e8b (rabbitmq) was prepared for execution.
2026-02-05 02:19:53.839802 | orchestrator | 2026-02-05 02:19:53 | INFO  | It takes a moment until task 8a1500bf-eb75-48ec-b4ae-119f07e84e8b (rabbitmq) has been started and output is visible here.
2026-02-05 02:20:06.763347 | orchestrator |
2026-02-05 02:20:06.763446 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 02:20:06.763458 | orchestrator |
2026-02-05 02:20:06.763464 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 02:20:06.763471 | orchestrator | Thursday 05 February 2026 02:19:57 +0000 (0:00:00.166) 0:00:00.166 *****
2026-02-05 02:20:06.763477 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:20:06.763485 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:20:06.763492 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:20:06.763496 | orchestrator |
2026-02-05 02:20:06.763500 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 02:20:06.763505 | orchestrator | Thursday 05 February 2026 02:19:58 +0000 (0:00:00.309) 0:00:00.475 *****
2026-02-05 02:20:06.763509 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-05 02:20:06.763513 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-05 02:20:06.763518 | orchestrator | ok: [testbed-node-2] =>
(item=enable_rabbitmq_True) 2026-02-05 02:20:06.763521 | orchestrator | 2026-02-05 02:20:06.763525 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-05 02:20:06.763530 | orchestrator | 2026-02-05 02:20:06.763534 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 02:20:06.763538 | orchestrator | Thursday 05 February 2026 02:19:58 +0000 (0:00:00.543) 0:00:01.018 ***** 2026-02-05 02:20:06.763542 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:20:06.763547 | orchestrator | 2026-02-05 02:20:06.763551 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 02:20:06.763555 | orchestrator | Thursday 05 February 2026 02:19:59 +0000 (0:00:00.527) 0:00:01.546 ***** 2026-02-05 02:20:06.763558 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:20:06.763562 | orchestrator | 2026-02-05 02:20:06.763566 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-05 02:20:06.763570 | orchestrator | Thursday 05 February 2026 02:20:00 +0000 (0:00:01.013) 0:00:02.559 ***** 2026-02-05 02:20:06.763574 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:06.763579 | orchestrator | 2026-02-05 02:20:06.763583 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-05 02:20:06.763587 | orchestrator | Thursday 05 February 2026 02:20:00 +0000 (0:00:00.377) 0:00:02.937 ***** 2026-02-05 02:20:06.763591 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:06.763595 | orchestrator | 2026-02-05 02:20:06.763598 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-05 02:20:06.763602 | orchestrator | Thursday 05 February 2026 02:20:00 +0000 (0:00:00.375) 0:00:03.312 ***** 2026-02-05 
02:20:06.763606 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:06.763610 | orchestrator | 2026-02-05 02:20:06.763614 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-05 02:20:06.763617 | orchestrator | Thursday 05 February 2026 02:20:01 +0000 (0:00:00.373) 0:00:03.686 ***** 2026-02-05 02:20:06.763621 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:06.763625 | orchestrator | 2026-02-05 02:20:06.763629 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 02:20:06.763635 | orchestrator | Thursday 05 February 2026 02:20:01 +0000 (0:00:00.378) 0:00:04.065 ***** 2026-02-05 02:20:06.763655 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:20:06.763662 | orchestrator | 2026-02-05 02:20:06.763680 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 02:20:06.763684 | orchestrator | Thursday 05 February 2026 02:20:02 +0000 (0:00:00.838) 0:00:04.903 ***** 2026-02-05 02:20:06.763688 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:20:06.763692 | orchestrator | 2026-02-05 02:20:06.763696 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-05 02:20:06.763700 | orchestrator | Thursday 05 February 2026 02:20:03 +0000 (0:00:00.934) 0:00:05.838 ***** 2026-02-05 02:20:06.763703 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:06.763707 | orchestrator | 2026-02-05 02:20:06.763711 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-05 02:20:06.763715 | orchestrator | Thursday 05 February 2026 02:20:03 +0000 (0:00:00.338) 0:00:06.176 ***** 2026-02-05 02:20:06.763719 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:06.763722 | orchestrator | 2026-02-05 02:20:06.763726 
| orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-05 02:20:06.763730 | orchestrator | Thursday 05 February 2026 02:20:04 +0000 (0:00:00.378) 0:00:06.555 ***** 2026-02-05 02:20:06.763750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:06.763757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:06.763762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:06.763770 | orchestrator | 2026-02-05 02:20:06.763777 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-05 02:20:06.763781 | orchestrator | Thursday 05 February 2026 02:20:05 +0000 (0:00:00.849) 0:00:07.405 ***** 2026-02-05 02:20:06.763786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:06.763795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:24.692964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:24.693075 | orchestrator | 2026-02-05 02:20:24.693088 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-05 02:20:24.693096 | orchestrator | Thursday 05 February 2026 02:20:06 +0000 (0:00:01.669) 0:00:09.075 ***** 2026-02-05 02:20:24.693131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 02:20:24.693141 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 02:20:24.693197 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 02:20:24.693205 | orchestrator | 2026-02-05 02:20:24.693213 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-05 02:20:24.693219 | orchestrator | Thursday 05 February 2026 02:20:08 +0000 (0:00:01.508) 0:00:10.584 ***** 2026-02-05 02:20:24.693227 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 02:20:24.693250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 02:20:24.693257 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 02:20:24.693263 | orchestrator | 2026-02-05 02:20:24.693269 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-05 02:20:24.693275 | orchestrator | Thursday 05 February 2026 02:20:09 +0000 (0:00:01.741) 0:00:12.325 ***** 2026-02-05 02:20:24.693281 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 02:20:24.693287 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 02:20:24.693293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 02:20:24.693300 | orchestrator | 2026-02-05 02:20:24.693306 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-05 02:20:24.693312 | orchestrator | Thursday 05 February 2026 02:20:11 +0000 (0:00:01.307) 0:00:13.633 ***** 2026-02-05 02:20:24.693318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 02:20:24.693326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 02:20:24.693332 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 02:20:24.693340 | orchestrator | 2026-02-05 02:20:24.693347 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-05 02:20:24.693354 | orchestrator | Thursday 05 February 2026 02:20:13 +0000 (0:00:01.887) 0:00:15.520 ***** 2026-02-05 02:20:24.693361 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 02:20:24.693368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 02:20:24.693375 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 02:20:24.693383 | orchestrator | 2026-02-05 02:20:24.693390 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-05 02:20:24.693398 | orchestrator | Thursday 05 February 2026 02:20:14 +0000 (0:00:01.448) 0:00:16.968 ***** 2026-02-05 02:20:24.693404 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 02:20:24.693411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 02:20:24.693418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 02:20:24.693424 | orchestrator | 2026-02-05 02:20:24.693430 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 02:20:24.693437 | orchestrator | Thursday 05 February 2026 02:20:16 +0000 (0:00:01.438) 0:00:18.406 ***** 2026-02-05 02:20:24.693443 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:20:24.693451 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:20:24.693473 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:20:24.693489 | orchestrator | 2026-02-05 02:20:24.693496 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-05 02:20:24.693502 | orchestrator | 
Thursday 05 February 2026 02:20:16 +0000 (0:00:00.437) 0:00:18.844 ***** 2026-02-05 02:20:24.693511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:24.693524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:24.693534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 02:20:24.693542 | orchestrator | 2026-02-05 02:20:24.693549 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-05 02:20:24.693557 | orchestrator | Thursday 05 February 2026 02:20:17 +0000 (0:00:01.133) 0:00:19.977 ***** 2026-02-05 02:20:24.693564 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:20:24.693572 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:20:24.693579 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:20:24.693586 | orchestrator | 2026-02-05 02:20:24.693593 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-05 02:20:24.693606 | orchestrator | Thursday 05 February 2026 02:20:18 +0000 (0:00:01.168) 0:00:21.146 ***** 2026-02-05 02:20:24.693613 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:20:24.693621 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:20:24.693629 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:20:24.693636 | orchestrator | 2026-02-05 02:20:24.693644 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-05 02:20:24.693657 | orchestrator | Thursday 05 February 2026 02:20:24 +0000 (0:00:05.857) 0:00:27.004 ***** 2026-02-05 02:22:04.921289 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:22:04.921369 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:22:04.921377 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:22:04.921382 | orchestrator | 2026-02-05 02:22:04.921389 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 02:22:04.921395 | orchestrator | 2026-02-05 02:22:04.921400 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 02:22:04.921405 | orchestrator | Thursday 05 February 2026 02:20:25 +0000 (0:00:00.479) 0:00:27.483 ***** 2026-02-05 02:22:04.921410 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:22:04.921416 | orchestrator | 2026-02-05 02:22:04.921421 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 02:22:04.921426 | orchestrator | Thursday 05 February 2026 02:20:25 +0000 (0:00:00.649) 0:00:28.132 ***** 2026-02-05 02:22:04.921431 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:22:04.921436 | orchestrator | 2026-02-05 02:22:04.921440 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 02:22:04.921445 | orchestrator | Thursday 
05 February 2026 02:20:26 +0000 (0:00:00.226) 0:00:28.359 ***** 2026-02-05 02:22:04.921450 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:22:04.921455 | orchestrator | 2026-02-05 02:22:04.921460 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 02:22:04.921465 | orchestrator | Thursday 05 February 2026 02:20:32 +0000 (0:00:06.655) 0:00:35.015 ***** 2026-02-05 02:22:04.921470 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:22:04.921475 | orchestrator | 2026-02-05 02:22:04.921480 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 02:22:04.921485 | orchestrator | 2026-02-05 02:22:04.921490 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 02:22:04.921494 | orchestrator | Thursday 05 February 2026 02:21:23 +0000 (0:00:51.101) 0:01:26.116 ***** 2026-02-05 02:22:04.921499 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:22:04.921504 | orchestrator | 2026-02-05 02:22:04.921509 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 02:22:04.921514 | orchestrator | Thursday 05 February 2026 02:21:24 +0000 (0:00:00.701) 0:01:26.817 ***** 2026-02-05 02:22:04.921519 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:22:04.921524 | orchestrator | 2026-02-05 02:22:04.921529 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 02:22:04.921534 | orchestrator | Thursday 05 February 2026 02:21:24 +0000 (0:00:00.232) 0:01:27.050 ***** 2026-02-05 02:22:04.921538 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:22:04.921543 | orchestrator | 2026-02-05 02:22:04.921548 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 02:22:04.921565 | orchestrator | Thursday 05 February 2026 02:21:26 +0000 (0:00:01.708) 
0:01:28.758 ***** 2026-02-05 02:22:04.921571 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:22:04.921575 | orchestrator | 2026-02-05 02:22:04.921580 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 02:22:04.921585 | orchestrator | 2026-02-05 02:22:04.921590 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 02:22:04.921595 | orchestrator | Thursday 05 February 2026 02:21:42 +0000 (0:00:16.429) 0:01:45.188 ***** 2026-02-05 02:22:04.921600 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:22:04.921605 | orchestrator | 2026-02-05 02:22:04.921610 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 02:22:04.921630 | orchestrator | Thursday 05 February 2026 02:21:43 +0000 (0:00:00.683) 0:01:45.872 ***** 2026-02-05 02:22:04.921635 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:22:04.921640 | orchestrator | 2026-02-05 02:22:04.921645 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 02:22:04.921649 | orchestrator | Thursday 05 February 2026 02:21:43 +0000 (0:00:00.235) 0:01:46.108 ***** 2026-02-05 02:22:04.921654 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:22:04.921661 | orchestrator | 2026-02-05 02:22:04.921669 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 02:22:04.921680 | orchestrator | Thursday 05 February 2026 02:21:45 +0000 (0:00:01.899) 0:01:48.007 ***** 2026-02-05 02:22:04.921691 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:22:04.921698 | orchestrator | 2026-02-05 02:22:04.921706 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-05 02:22:04.921714 | orchestrator | 2026-02-05 02:22:04.921721 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-05 02:22:04.921728 | orchestrator | Thursday 05 February 2026 02:22:01 +0000 (0:00:15.814) 0:02:03.822 ***** 2026-02-05 02:22:04.921735 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:22:04.921743 | orchestrator | 2026-02-05 02:22:04.921751 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-05 02:22:04.921759 | orchestrator | Thursday 05 February 2026 02:22:01 +0000 (0:00:00.485) 0:02:04.307 ***** 2026-02-05 02:22:04.921766 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-05 02:22:04.921774 | orchestrator | enable_outward_rabbitmq_True 2026-02-05 02:22:04.921782 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-05 02:22:04.921790 | orchestrator | outward_rabbitmq_restart 2026-02-05 02:22:04.921799 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:22:04.921807 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:22:04.921814 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:22:04.921819 | orchestrator | 2026-02-05 02:22:04.921824 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-05 02:22:04.921829 | orchestrator | skipping: no hosts matched 2026-02-05 02:22:04.921834 | orchestrator | 2026-02-05 02:22:04.921839 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-05 02:22:04.921843 | orchestrator | skipping: no hosts matched 2026-02-05 02:22:04.921848 | orchestrator | 2026-02-05 02:22:04.921853 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-05 02:22:04.921860 | orchestrator | skipping: no hosts matched 2026-02-05 02:22:04.921865 | orchestrator | 2026-02-05 02:22:04.921871 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-05 02:22:04.921889 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-05 02:22:04.921897 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:22:04.921902 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:22:04.921908 | orchestrator | 2026-02-05 02:22:04.921914 | orchestrator | 2026-02-05 02:22:04.921920 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:22:04.921925 | orchestrator | Thursday 05 February 2026 02:22:04 +0000 (0:00:02.629) 0:02:06.937 ***** 2026-02-05 02:22:04.921931 | orchestrator | =============================================================================== 2026-02-05 02:22:04.921937 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.35s 2026-02-05 02:22:04.921943 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.26s 2026-02-05 02:22:04.921955 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.86s 2026-02-05 02:22:04.921961 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.63s 2026-02-05 02:22:04.921966 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s 2026-02-05 02:22:04.921972 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.89s 2026-02-05 02:22:04.921978 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.74s 2026-02-05 02:22:04.921984 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.67s 2026-02-05 02:22:04.921990 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.51s 2026-02-05 02:22:04.921995 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.45s 2026-02-05 02:22:04.922001 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.44s 2026-02-05 02:22:04.922007 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.31s 2026-02-05 02:22:04.922053 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.17s 2026-02-05 02:22:04.922059 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.13s 2026-02-05 02:22:04.922069 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2026-02-05 02:22:04.922074 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.93s 2026-02-05 02:22:04.922079 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.85s 2026-02-05 02:22:04.922084 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s 2026-02-05 02:22:04.922089 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.69s 2026-02-05 02:22:04.922093 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-02-05 02:22:07.165179 | orchestrator | 2026-02-05 02:22:07 | INFO  | Task 4bfbd832-bb93-4371-bfed-fc31b27982eb (openvswitch) was prepared for execution. 2026-02-05 02:22:07.165341 | orchestrator | 2026-02-05 02:22:07 | INFO  | It takes a moment until task 4bfbd832-bb93-4371-bfed-fc31b27982eb (openvswitch) has been started and output is visible here. 
2026-02-05 02:22:19.493022 | orchestrator | 2026-02-05 02:22:19.493098 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 02:22:19.493104 | orchestrator | 2026-02-05 02:22:19.493109 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 02:22:19.493113 | orchestrator | Thursday 05 February 2026 02:22:11 +0000 (0:00:00.251) 0:00:00.251 ***** 2026-02-05 02:22:19.493118 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:22:19.493122 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:22:19.493126 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:22:19.493179 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:22:19.493185 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:22:19.493189 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:22:19.493193 | orchestrator | 2026-02-05 02:22:19.493197 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 02:22:19.493201 | orchestrator | Thursday 05 February 2026 02:22:11 +0000 (0:00:00.673) 0:00:00.925 ***** 2026-02-05 02:22:19.493206 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 02:22:19.493210 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 02:22:19.493214 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 02:22:19.493218 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 02:22:19.493223 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 02:22:19.493229 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 02:22:19.493235 | orchestrator | 2026-02-05 02:22:19.493263 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-05 02:22:19.493269 | orchestrator | 2026-02-05 02:22:19.493276 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-05 02:22:19.493282 | orchestrator | Thursday 05 February 2026 02:22:12 +0000 (0:00:00.600) 0:00:01.525 ***** 2026-02-05 02:22:19.493290 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:22:19.493298 | orchestrator | 2026-02-05 02:22:19.493305 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 02:22:19.493311 | orchestrator | Thursday 05 February 2026 02:22:13 +0000 (0:00:01.071) 0:00:02.597 ***** 2026-02-05 02:22:19.493317 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-05 02:22:19.493324 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-05 02:22:19.493331 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-05 02:22:19.493337 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-05 02:22:19.493343 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-05 02:22:19.493347 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-05 02:22:19.493350 | orchestrator | 2026-02-05 02:22:19.493354 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-05 02:22:19.493358 | orchestrator | Thursday 05 February 2026 02:22:14 +0000 (0:00:01.159) 0:00:03.757 ***** 2026-02-05 02:22:19.493362 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-05 02:22:19.493366 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-05 02:22:19.493370 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-05 02:22:19.493374 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-02-05 02:22:19.493378 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-05 02:22:19.493382 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-05 02:22:19.493385 | orchestrator | 2026-02-05 02:22:19.493389 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 02:22:19.493393 | orchestrator | Thursday 05 February 2026 02:22:16 +0000 (0:00:01.419) 0:00:05.176 ***** 2026-02-05 02:22:19.493397 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-05 02:22:19.493401 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:22:19.493406 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-05 02:22:19.493410 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:22:19.493413 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-05 02:22:19.493417 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:22:19.493421 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-05 02:22:19.493425 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:22:19.493428 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-05 02:22:19.493432 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:22:19.493436 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-05 02:22:19.493440 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:22:19.493443 | orchestrator | 2026-02-05 02:22:19.493447 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-05 02:22:19.493451 | orchestrator | Thursday 05 February 2026 02:22:17 +0000 (0:00:01.102) 0:00:06.279 ***** 2026-02-05 02:22:19.493455 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:22:19.493459 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:22:19.493463 | orchestrator | skipping: [testbed-node-2] 
2026-02-05 02:22:19.493467 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:22:19.493470 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:22:19.493474 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:22:19.493478 | orchestrator | 2026-02-05 02:22:19.493482 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-05 02:22:19.493493 | orchestrator | Thursday 05 February 2026 02:22:17 +0000 (0:00:00.593) 0:00:06.873 ***** 2026-02-05 02:22:19.493512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:19.493520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:19.493525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:19.493554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:19.493562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:19.493569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927835 | orchestrator | 2026-02-05 02:22:21.927844 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-05 02:22:21.927851 | orchestrator | Thursday 05 February 2026 02:22:19 +0000 (0:00:01.704) 0:00:08.577 ***** 2026-02-05 02:22:21.927858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:21.927904 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669829 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669897 | orchestrator | 2026-02-05 02:22:24.669903 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-05 02:22:24.669909 | orchestrator | Thursday 05 February 2026 02:22:22 +0000 (0:00:02.443) 0:00:11.021 ***** 2026-02-05 02:22:24.669914 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:22:24.669920 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:22:24.669925 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:22:24.669929 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:22:24.669934 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:22:24.669938 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:22:24.669944 | orchestrator | 2026-02-05 02:22:24.669948 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-05 02:22:24.669953 | orchestrator | Thursday 05 February 2026 02:22:22 +0000 (0:00:00.896) 0:00:11.917 ***** 2026-02-05 02:22:24.669958 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:24.669993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:50.188924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 02:22:50.189021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:50.189029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 
02:22:50.189073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:50.189079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:50.189094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:50.189098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 02:22:50.189102 | orchestrator | 2026-02-05 02:22:50.189108 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 02:22:50.189114 | orchestrator | Thursday 05 February 2026 02:22:24 +0000 (0:00:01.834) 0:00:13.752 ***** 2026-02-05 02:22:50.189118 | orchestrator | 2026-02-05 02:22:50.189121 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 02:22:50.189125 | orchestrator | Thursday 05 February 2026 02:22:25 +0000 (0:00:00.308) 0:00:14.060 ***** 2026-02-05 02:22:50.189176 | orchestrator | 2026-02-05 02:22:50.189186 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 02:22:50.189190 | orchestrator | Thursday 05 February 2026 02:22:25 +0000 (0:00:00.128) 0:00:14.189 ***** 2026-02-05 02:22:50.189193 | orchestrator | 2026-02-05 02:22:50.189197 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-05 02:22:50.189201 | orchestrator | Thursday 05 February 2026 02:22:25 +0000 (0:00:00.133) 0:00:14.322 ***** 2026-02-05 02:22:50.189205 | orchestrator | 2026-02-05 02:22:50.189209 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 02:22:50.189212 | orchestrator | Thursday 05 February 2026 02:22:25 +0000 (0:00:00.128) 0:00:14.451 ***** 2026-02-05 02:22:50.189216 | orchestrator | 2026-02-05 02:22:50.189220 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 02:22:50.189224 | orchestrator | Thursday 05 February 2026 02:22:25 +0000 (0:00:00.140) 0:00:14.591 ***** 2026-02-05 02:22:50.189228 | orchestrator | 2026-02-05 02:22:50.189231 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-05 02:22:50.189235 | orchestrator | Thursday 05 February 2026 02:22:25 +0000 (0:00:00.134) 0:00:14.726 ***** 2026-02-05 02:22:50.189239 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:22:50.189244 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:22:50.189248 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:22:50.189252 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:22:50.189256 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:22:50.189260 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:22:50.189263 | orchestrator | 2026-02-05 02:22:50.189267 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-05 02:22:50.189272 | orchestrator | Thursday 05 February 2026 02:22:34 +0000 (0:00:08.637) 0:00:23.363 ***** 2026-02-05 02:22:50.189276 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:22:50.189285 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:22:50.189289 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:22:50.189292 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:22:50.189296 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 02:22:50.189300 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:22:50.189304 | orchestrator | 2026-02-05 02:22:50.189308 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-05 02:22:50.189312 | orchestrator | Thursday 05 February 2026 02:22:35 +0000 (0:00:01.041) 0:00:24.405 ***** 2026-02-05 02:22:50.189316 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:22:50.189320 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:22:50.189323 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:22:50.189327 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:22:50.189331 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:22:50.189335 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:22:50.189338 | orchestrator | 2026-02-05 02:22:50.189342 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-05 02:22:50.189346 | orchestrator | Thursday 05 February 2026 02:22:43 +0000 (0:00:07.662) 0:00:32.068 ***** 2026-02-05 02:22:50.189350 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-05 02:22:50.189354 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-05 02:22:50.189358 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-05 02:22:50.189362 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-05 02:22:50.189366 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-05 02:22:50.189369 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-05 
02:22:50.189373 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-05 02:22:50.189384 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-05 02:23:03.187976 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-05 02:23:03.188109 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-05 02:23:03.188203 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-05 02:23:03.188225 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-05 02:23:03.188245 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 02:23:03.188263 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 02:23:03.188282 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 02:23:03.188299 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 02:23:03.188318 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 02:23:03.188336 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 02:23:03.188355 | orchestrator | 2026-02-05 02:23:03.188376 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-02-05 02:23:03.188396 | orchestrator | Thursday 05 February 2026 02:22:50 +0000 (0:00:07.121) 0:00:39.190 ***** 2026-02-05 02:23:03.188416 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-05 02:23:03.188435 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:23:03.188454 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-05 02:23:03.188473 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:23:03.188493 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-05 02:23:03.188512 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:23:03.188531 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-05 02:23:03.188550 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-05 02:23:03.188568 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-05 02:23:03.188585 | orchestrator | 2026-02-05 02:23:03.188604 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-05 02:23:03.188625 | orchestrator | Thursday 05 February 2026 02:22:52 +0000 (0:00:02.278) 0:00:41.468 ***** 2026-02-05 02:23:03.188647 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-05 02:23:03.188667 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:23:03.188687 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-05 02:23:03.188707 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:23:03.188725 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-05 02:23:03.188743 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:23:03.188761 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-05 02:23:03.188780 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-05 02:23:03.188824 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-05 02:23:03.188844 | orchestrator 
| 2026-02-05 02:23:03.188862 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-05 02:23:03.188881 | orchestrator | Thursday 05 February 2026 02:22:55 +0000 (0:00:03.231) 0:00:44.700 ***** 2026-02-05 02:23:03.188899 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:23:03.188921 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:23:03.188974 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:23:03.188996 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:23:03.189015 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:23:03.189030 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:23:03.189041 | orchestrator | 2026-02-05 02:23:03.189052 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:23:03.189065 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 02:23:03.189077 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 02:23:03.189088 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 02:23:03.189098 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 02:23:03.189109 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 02:23:03.189120 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 02:23:03.189213 | orchestrator | 2026-02-05 02:23:03.189228 | orchestrator | 2026-02-05 02:23:03.189247 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:23:03.189274 | orchestrator | Thursday 05 February 2026 02:23:02 +0000 (0:00:07.111) 0:00:51.811 ***** 2026-02-05 02:23:03.189322 | 
orchestrator | =============================================================================== 2026-02-05 02:23:03.189340 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.77s 2026-02-05 02:23:03.189358 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.64s 2026-02-05 02:23:03.189375 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.12s 2026-02-05 02:23:03.189392 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.23s 2026-02-05 02:23:03.189411 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.44s 2026-02-05 02:23:03.189429 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.28s 2026-02-05 02:23:03.189448 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.83s 2026-02-05 02:23:03.189466 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.70s 2026-02-05 02:23:03.189486 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.42s 2026-02-05 02:23:03.189503 | orchestrator | module-load : Load modules ---------------------------------------------- 1.16s 2026-02-05 02:23:03.189522 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.10s 2026-02-05 02:23:03.189540 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.07s 2026-02-05 02:23:03.189558 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.04s 2026-02-05 02:23:03.189576 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.97s 2026-02-05 02:23:03.189593 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.90s 2026-02-05 02:23:03.189609 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.67s 2026-02-05 02:23:03.189628 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-02-05 02:23:03.189646 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.59s 2026-02-05 02:23:05.476846 | orchestrator | 2026-02-05 02:23:05 | INFO  | Task 3f699874-1783-4133-868b-8a49d2a70aef (ovn) was prepared for execution. 2026-02-05 02:23:05.476922 | orchestrator | 2026-02-05 02:23:05 | INFO  | It takes a moment until task 3f699874-1783-4133-868b-8a49d2a70aef (ovn) has been started and output is visible here. 2026-02-05 02:23:16.006349 | orchestrator | 2026-02-05 02:23:16.006429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 02:23:16.006436 | orchestrator | 2026-02-05 02:23:16.006441 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 02:23:16.006446 | orchestrator | Thursday 05 February 2026 02:23:09 +0000 (0:00:00.157) 0:00:00.157 ***** 2026-02-05 02:23:16.006450 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:23:16.006455 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:23:16.006459 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:23:16.006463 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:23:16.006467 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:23:16.006471 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:23:16.006474 | orchestrator | 2026-02-05 02:23:16.006478 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 02:23:16.006483 | orchestrator | Thursday 05 February 2026 02:23:10 +0000 (0:00:00.657) 0:00:00.815 ***** 2026-02-05 02:23:16.006506 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-05 02:23:16.006511 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-05 
02:23:16.006515 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-05 02:23:16.006518 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-05 02:23:16.006522 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-05 02:23:16.006526 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-05 02:23:16.006530 | orchestrator |
2026-02-05 02:23:16.006534 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-05 02:23:16.006538 | orchestrator |
2026-02-05 02:23:16.006542 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-05 02:23:16.006546 | orchestrator | Thursday 05 February 2026 02:23:10 +0000 (0:00:00.814) 0:00:01.629 *****
2026-02-05 02:23:16.006550 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:23:16.006555 | orchestrator |
2026-02-05 02:23:16.006559 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-05 02:23:16.006563 | orchestrator | Thursday 05 February 2026 02:23:11 +0000 (0:00:01.084) 0:00:02.713 *****
2026-02-05 02:23:16.006568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006619 | orchestrator |
2026-02-05 02:23:16.006623 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-05 02:23:16.006626 | orchestrator | Thursday 05 February 2026 02:23:13 +0000 (0:00:01.299) 0:00:04.013 *****
2026-02-05 02:23:16.006633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006661 | orchestrator |
2026-02-05 02:23:16.006664 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-05 02:23:16.006668 | orchestrator | Thursday 05 February 2026 02:23:14 +0000 (0:00:01.605) 0:00:05.619 *****
2026-02-05 02:23:16.006672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:16.006684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618711 | orchestrator |
2026-02-05 02:23:42.618732 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-05 02:23:42.618752 | orchestrator | Thursday 05 February 2026 02:23:15 +0000 (0:00:01.112) 0:00:06.731 *****
2026-02-05 02:23:42.618771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.618946 | orchestrator |
2026-02-05 02:23:42.618965 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-05 02:23:42.618980 | orchestrator | Thursday 05 February 2026 02:23:17 +0000 (0:00:01.613) 0:00:08.344 *****
2026-02-05 02:23:42.619000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.619012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.619029 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.619047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.619068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.619079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 02:23:42.619090 | orchestrator |
2026-02-05 02:23:42.619101 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-05 02:23:42.619112 | orchestrator | Thursday 05 February 2026 02:23:18 +0000 (0:00:01.173) 0:00:09.518 *****
2026-02-05 02:23:42.619234 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:23:42.619252 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:23:42.619263 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:23:42.619274 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:23:42.619285 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:23:42.619296 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:23:42.619307 | orchestrator |
2026-02-05 02:23:42.619318 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-02-05 02:23:42.619329 | orchestrator | Thursday 05 February 2026 02:23:21 +0000 (0:00:02.877) 0:00:12.396 *****
2026-02-05 02:23:42.619340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-05 02:23:42.619351 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-05 02:23:42.619362 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-02-05 02:23:42.619373 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-05 02:23:42.619383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-05 02:23:42.619394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-05 02:23:42.619415 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 02:24:11.254288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 02:24:11.254387 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 02:24:11.254414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 02:24:11.254422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 02:24:11.254429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-05 02:24:11.254436 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 02:24:11.254444 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 02:24:11.254471 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 02:24:11.254477 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 02:24:11.254485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 02:24:11.254491 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-05 02:24:11.254498 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 02:24:11.254506 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 02:24:11.254512 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 02:24:11.254518 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 02:24:11.254526 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 02:24:11.254533 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-05 02:24:11.254539 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 02:24:11.254546 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 02:24:11.254552 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 02:24:11.254558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 02:24:11.254565 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 02:24:11.254571 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-05 02:24:11.254577 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 02:24:11.254584 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 02:24:11.254591 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 02:24:11.254597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 02:24:11.254604 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 02:24:11.254610 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-05 02:24:11.254617 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-05 02:24:11.254624 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-05 02:24:11.254645 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-05 02:24:11.254660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-05 02:24:11.254668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-05 02:24:11.254675 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-05 02:24:11.254684 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-05 02:24:11.254710 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-05 02:24:11.254714 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-05 02:24:11.254723 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-05 02:24:11.254728 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-05 02:24:11.254732 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-05 02:24:11.254736 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-05 02:24:11.254740 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-05 02:24:11.254744 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-05 02:24:11.254749 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-05 02:24:11.254753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-05 02:24:11.254757 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-05 02:24:11.254761 | orchestrator |
2026-02-05 02:24:11.254766 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-05 02:24:11.254770 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:20.409) 0:00:32.805 *****
2026-02-05 02:24:11.254774 | orchestrator |
2026-02-05 02:24:11.254778 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-05 02:24:11.254782 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:00.218) 0:00:33.024 *****
2026-02-05 02:24:11.254786 | orchestrator |
2026-02-05 02:24:11.254791 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-05 02:24:11.254795 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:00.063) 0:00:33.087 *****
2026-02-05 02:24:11.254799 | orchestrator |
2026-02-05 02:24:11.254803 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-05 02:24:11.254807 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:00.059) 0:00:33.147 *****
2026-02-05 02:24:11.254811 | orchestrator |
2026-02-05 02:24:11.254815 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-05 02:24:11.254819 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:00.063) 0:00:33.210 *****
2026-02-05 02:24:11.254823 | orchestrator |
2026-02-05 02:24:11.254827 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-05 02:24:11.254831 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:00.062) 0:00:33.273 *****
2026-02-05 02:24:11.254835 | orchestrator |
2026-02-05 02:24:11.254840 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-02-05 02:24:11.254845 | orchestrator | Thursday 05 February 2026 02:23:42 +0000 (0:00:00.063) 0:00:33.337 *****
2026-02-05 02:24:11.254850 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:24:11.254856 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:24:11.254861 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:24:11.254866 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:11.254871 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:11.254876 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:11.254881 | orchestrator |
2026-02-05 02:24:11.254885 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-05 02:24:11.254890 | orchestrator | Thursday 05 February 2026 02:23:44 +0000 (0:00:01.611) 0:00:34.948 *****
2026-02-05 02:24:11.254899 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:24:11.254904 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:24:11.254909 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:24:11.254913 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:24:11.254918 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:24:11.254923 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:24:11.254928 | orchestrator |
2026-02-05 02:24:11.254933 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-05 02:24:11.254938 | orchestrator |
2026-02-05 02:24:11.254943 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-05 02:24:11.254947 | orchestrator | Thursday 05 February 2026 02:24:09 +0000 (0:00:25.269) 0:01:00.218 *****
2026-02-05 02:24:11.254952 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:24:11.254957 | orchestrator |
2026-02-05 02:24:11.254962 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-05 02:24:11.254967 | orchestrator | Thursday 05 February 2026 02:24:10 +0000 (0:00:00.542) 0:01:00.760 *****
2026-02-05 02:24:11.254972 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:24:11.254977 | orchestrator |
2026-02-05 02:24:11.254982 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-05 02:24:11.254986 | orchestrator | Thursday 05 February 2026 02:24:10 +0000 (0:00:00.484) 0:01:01.245 *****
2026-02-05 02:24:11.254991 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:11.254996 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:11.255001 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:11.255006 | orchestrator |
2026-02-05 02:24:11.255011 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-05 02:24:11.255019 | orchestrator | Thursday 05 February 2026 02:24:11 +0000 (0:00:00.736) 0:01:01.981 *****
2026-02-05 02:24:22.068540 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:22.068636 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:22.068655 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:22.068675 | orchestrator |
2026-02-05 02:24:22.068689 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-05 02:24:22.068720 | orchestrator | Thursday 05 February 2026 02:24:11 +0000 (0:00:00.426) 0:01:02.407 *****
2026-02-05 02:24:22.068732 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:22.068743 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:22.068755 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:22.068768 | orchestrator |
2026-02-05 02:24:22.068780 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-05 02:24:22.068793 | orchestrator | Thursday 05 February 2026 02:24:11 +0000 (0:00:00.278) 0:01:02.686 *****
2026-02-05 02:24:22.068805 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:22.068818 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:22.068830 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:22.068844 | orchestrator |
2026-02-05 02:24:22.068860 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-05 02:24:22.068870 | orchestrator | Thursday 05 February 2026 02:24:12 +0000 (0:00:00.262) 0:01:02.949 *****
2026-02-05 02:24:22.068878 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:22.068885 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:22.068893 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:22.068900 | orchestrator |
2026-02-05 02:24:22.068908 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-05 02:24:22.068915 | orchestrator | Thursday 05 February 2026 02:24:12 +0000 (0:00:00.281) 0:01:03.230 *****
2026-02-05 02:24:22.068922 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.068931 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.068939 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.068946 | orchestrator |
2026-02-05 02:24:22.068953 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-05 02:24:22.068981 | orchestrator | Thursday 05 February 2026 02:24:12 +0000 (0:00:00.424) 0:01:03.655 *****
2026-02-05 02:24:22.068989 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.068996 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069004 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069011 | orchestrator |
2026-02-05 02:24:22.069018 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-05 02:24:22.069025 | orchestrator | Thursday 05 February 2026 02:24:13 +0000 (0:00:00.295) 0:01:03.964 *****
2026-02-05 02:24:22.069033 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069040 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069047 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069055 | orchestrator |
2026-02-05 02:24:22.069062 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-05 02:24:22.069069 | orchestrator | Thursday 05 February 2026 02:24:13 +0000 (0:00:00.295) 0:01:04.260 *****
2026-02-05 02:24:22.069076 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069084 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069091 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069098 | orchestrator |
2026-02-05 02:24:22.069105 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-05 02:24:22.069113 | orchestrator | Thursday 05 February 2026 02:24:13 +0000 (0:00:00.304) 0:01:04.565 *****
2026-02-05 02:24:22.069152 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069161 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069169 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069176 | orchestrator |
2026-02-05 02:24:22.069183 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-05 02:24:22.069191 | orchestrator | Thursday 05 February 2026 02:24:14 +0000 (0:00:00.556) 0:01:05.121 *****
2026-02-05 02:24:22.069198 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069205 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069212 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069219 | orchestrator |
2026-02-05 02:24:22.069227 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-05 02:24:22.069234 | orchestrator | Thursday 05 February 2026 02:24:14 +0000 (0:00:00.400) 0:01:05.521 *****
2026-02-05 02:24:22.069241 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069249 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069259 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069274 | orchestrator |
2026-02-05 02:24:22.069292 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-05 02:24:22.069304 | orchestrator | Thursday 05 February 2026 02:24:15 +0000 (0:00:00.304) 0:01:05.825 *****
2026-02-05 02:24:22.069316 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069327 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069338 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069349 | orchestrator |
2026-02-05 02:24:22.069360 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-05 02:24:22.069371 | orchestrator | Thursday 05 February 2026 02:24:15 +0000 (0:00:00.325) 0:01:06.151 *****
2026-02-05 02:24:22.069383 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069395 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069407 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069420 | orchestrator |
2026-02-05 02:24:22.069431 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-05 02:24:22.069444 | orchestrator | Thursday 05 February 2026 02:24:15 +0000 (0:00:00.286) 0:01:06.437 *****
2026-02-05 02:24:22.069456 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069466 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069474 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069482 | orchestrator |
2026-02-05 02:24:22.069492 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-05 02:24:22.069520 | orchestrator | Thursday 05 February 2026 02:24:16 +0000 (0:00:00.524) 0:01:06.962 *****
2026-02-05 02:24:22.069535 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069547 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069559 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069571 | orchestrator |
2026-02-05 02:24:22.069583 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-05 02:24:22.069595 | orchestrator | Thursday 05 February 2026 02:24:16 +0000 (0:00:00.298) 0:01:07.261 *****
2026-02-05 02:24:22.069627 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069639 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069651 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069663 | orchestrator |
2026-02-05 02:24:22.069675 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-05 02:24:22.069697 | orchestrator | Thursday 05 February 2026 02:24:16 +0000 (0:00:00.360) 0:01:07.621 *****
2026-02-05 02:24:22.069712 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:24:22.069724 | orchestrator |
2026-02-05 02:24:22.069736 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-05 02:24:22.069750 | orchestrator | Thursday 05 February 2026 02:24:17 +0000 (0:00:00.758) 0:01:08.379 *****
2026-02-05 02:24:22.069764 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:22.069777 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:22.069789 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:22.069801 | orchestrator |
2026-02-05 02:24:22.069812 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-05 02:24:22.069825 | orchestrator | Thursday 05 February 2026 02:24:18 +0000 (0:00:00.419) 0:01:08.799 *****
2026-02-05 02:24:22.069837 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:24:22.069850 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:24:22.069863 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:24:22.069876 | orchestrator |
2026-02-05 02:24:22.069891 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-05 02:24:22.069899 | orchestrator | Thursday 05 February 2026 02:24:18 +0000 (0:00:00.473) 0:01:09.272 *****
2026-02-05 02:24:22.069907 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069914 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069922 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069929 | orchestrator |
2026-02-05 02:24:22.069936 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-05 02:24:22.069960 | orchestrator | Thursday 05 February 2026 02:24:18 +0000 (0:00:00.316) 0:01:09.588 *****
2026-02-05 02:24:22.069967 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.069975 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.069991 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.069999 | orchestrator |
2026-02-05 02:24:22.070007 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-05 02:24:22.070058 | orchestrator | Thursday 05 February 2026 02:24:19 +0000 (0:00:00.440) 0:01:10.029 *****
2026-02-05 02:24:22.070066 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.070073 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.070081 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.070088 | orchestrator |
2026-02-05 02:24:22.070095 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-05 02:24:22.070103 | orchestrator | Thursday 05 February 2026 02:24:19 +0000 (0:00:00.284) 0:01:10.313 *****
2026-02-05 02:24:22.070110 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:24:22.070117 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:24:22.070148 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:24:22.070160 | orchestrator |
2026-02-05 02:24:22.070168 | orchestrator | TASK [ovn-db : Set
bootstrap args fact for NB (new member)] ******************** 2026-02-05 02:24:22.070176 | orchestrator | Thursday 05 February 2026 02:24:19 +0000 (0:00:00.321) 0:01:10.635 ***** 2026-02-05 02:24:22.070196 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:24:22.070203 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:24:22.070211 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:24:22.070218 | orchestrator | 2026-02-05 02:24:22.070225 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-05 02:24:22.070232 | orchestrator | Thursday 05 February 2026 02:24:20 +0000 (0:00:00.288) 0:01:10.923 ***** 2026-02-05 02:24:22.070239 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:24:22.070247 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:24:22.070254 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:24:22.070261 | orchestrator | 2026-02-05 02:24:22.070268 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-05 02:24:22.070275 | orchestrator | Thursday 05 February 2026 02:24:20 +0000 (0:00:00.305) 0:01:11.229 ***** 2026-02-05 02:24:22.070286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:22.070296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-05 02:24:22.070304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:22.070327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959352 | orchestrator | 2026-02-05 02:24:27.959358 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-05 02:24:27.959363 | orchestrator | Thursday 05 February 2026 02:24:22 +0000 (0:00:01.566) 0:01:12.796 ***** 2026-02-05 02:24:27.959368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959438 | orchestrator | 2026-02-05 02:24:27.959442 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-05 02:24:27.959446 | orchestrator | Thursday 05 February 2026 02:24:25 +0000 (0:00:03.579) 0:01:16.375 ***** 2026-02-05 02:24:27.959450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:27.959485 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603274 | orchestrator | 2026-02-05 02:24:55.603280 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 02:24:55.603286 | 
orchestrator | Thursday 05 February 2026 02:24:27 +0000 (0:00:02.124) 0:01:18.499 ***** 2026-02-05 02:24:55.603290 | orchestrator | 2026-02-05 02:24:55.603293 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 02:24:55.603297 | orchestrator | Thursday 05 February 2026 02:24:27 +0000 (0:00:00.067) 0:01:18.566 ***** 2026-02-05 02:24:55.603301 | orchestrator | 2026-02-05 02:24:55.603305 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 02:24:55.603308 | orchestrator | Thursday 05 February 2026 02:24:27 +0000 (0:00:00.057) 0:01:18.624 ***** 2026-02-05 02:24:55.603312 | orchestrator | 2026-02-05 02:24:55.603316 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-05 02:24:55.603320 | orchestrator | Thursday 05 February 2026 02:24:27 +0000 (0:00:00.061) 0:01:18.685 ***** 2026-02-05 02:24:55.603324 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:24:55.603329 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:24:55.603333 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:24:55.603337 | orchestrator | 2026-02-05 02:24:55.603341 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-05 02:24:55.603344 | orchestrator | Thursday 05 February 2026 02:24:35 +0000 (0:00:07.517) 0:01:26.203 ***** 2026-02-05 02:24:55.603348 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:24:55.603352 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:24:55.603356 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:24:55.603359 | orchestrator | 2026-02-05 02:24:55.603363 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-05 02:24:55.603367 | orchestrator | Thursday 05 February 2026 02:24:42 +0000 (0:00:06.592) 0:01:32.796 ***** 2026-02-05 02:24:55.603371 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 02:24:55.603374 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:24:55.603378 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:24:55.603382 | orchestrator | 2026-02-05 02:24:55.603386 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-05 02:24:55.603389 | orchestrator | Thursday 05 February 2026 02:24:49 +0000 (0:00:07.379) 0:01:40.176 ***** 2026-02-05 02:24:55.603393 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:24:55.603397 | orchestrator | 2026-02-05 02:24:55.603401 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-05 02:24:55.603404 | orchestrator | Thursday 05 February 2026 02:24:49 +0000 (0:00:00.137) 0:01:40.313 ***** 2026-02-05 02:24:55.603408 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:24:55.603413 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:24:55.603418 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:24:55.603422 | orchestrator | 2026-02-05 02:24:55.603425 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-05 02:24:55.603429 | orchestrator | Thursday 05 February 2026 02:24:50 +0000 (0:00:00.766) 0:01:41.079 ***** 2026-02-05 02:24:55.603433 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:24:55.603441 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:24:55.603445 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:24:55.603449 | orchestrator | 2026-02-05 02:24:55.603453 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-05 02:24:55.603456 | orchestrator | Thursday 05 February 2026 02:24:50 +0000 (0:00:00.600) 0:01:41.680 ***** 2026-02-05 02:24:55.603460 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:24:55.603464 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:24:55.603468 | orchestrator | ok: [testbed-node-2] 2026-02-05 
02:24:55.603471 | orchestrator | 2026-02-05 02:24:55.603475 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-05 02:24:55.603494 | orchestrator | Thursday 05 February 2026 02:24:51 +0000 (0:00:00.735) 0:01:42.415 ***** 2026-02-05 02:24:55.603498 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:24:55.603502 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:24:55.603506 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:24:55.603510 | orchestrator | 2026-02-05 02:24:55.603513 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-05 02:24:55.603517 | orchestrator | Thursday 05 February 2026 02:24:52 +0000 (0:00:00.643) 0:01:43.059 ***** 2026-02-05 02:24:55.603521 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:24:55.603525 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:24:55.603544 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:24:55.603548 | orchestrator | 2026-02-05 02:24:55.603552 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-05 02:24:55.603556 | orchestrator | Thursday 05 February 2026 02:24:53 +0000 (0:00:00.712) 0:01:43.771 ***** 2026-02-05 02:24:55.603560 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:24:55.603563 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:24:55.603567 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:24:55.603571 | orchestrator | 2026-02-05 02:24:55.603575 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-05 02:24:55.603579 | orchestrator | Thursday 05 February 2026 02:24:53 +0000 (0:00:00.931) 0:01:44.703 ***** 2026-02-05 02:24:55.603583 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:24:55.603587 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:24:55.603590 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:24:55.603594 | orchestrator | 2026-02-05 
02:24:55.603598 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-05 02:24:55.603601 | orchestrator | Thursday 05 February 2026 02:24:54 +0000 (0:00:00.230) 0:01:44.934 ***** 2026-02-05 02:24:55.603607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603613 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603618 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603623 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603633 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603637 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603642 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603650 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:24:55.603661 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405425 | orchestrator | 2026-02-05 02:25:02.405511 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-05 02:25:02.405519 | orchestrator | Thursday 05 February 2026 02:24:55 +0000 (0:00:01.393) 0:01:46.327 ***** 2026-02-05 02:25:02.405526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405533 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405537 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-05 02:25:02.405592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405596 | orchestrator | 2026-02-05 02:25:02.405600 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-05 02:25:02.405603 | orchestrator | Thursday 05 February 2026 02:24:59 +0000 (0:00:03.663) 0:01:49.990 ***** 2026-02-05 02:25:02.405618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405622 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 
02:25:02.405630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405642 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405656 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 02:25:02.405660 | orchestrator | 2026-02-05 02:25:02.405664 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 02:25:02.405668 | orchestrator | Thursday 05 February 2026 02:25:02 +0000 (0:00:02.826) 0:01:52.817 ***** 2026-02-05 02:25:02.405672 | orchestrator | 2026-02-05 02:25:02.405676 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 02:25:02.405680 | orchestrator | Thursday 05 February 2026 02:25:02 +0000 (0:00:00.180) 0:01:52.998 ***** 2026-02-05 02:25:02.405683 | orchestrator | 2026-02-05 02:25:02.405687 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-05 02:25:02.405691 | orchestrator | Thursday 05 February 2026 02:25:02 +0000 (0:00:00.059) 0:01:53.057 ***** 2026-02-05 02:25:02.405695 | orchestrator | 2026-02-05 02:25:02.405701 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-05 02:25:26.414339 | orchestrator | Thursday 05 February 2026 02:25:02 +0000 (0:00:00.058) 0:01:53.116 ***** 2026-02-05 02:25:26.414400 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:25:26.414410 | orchestrator | changed: 
[testbed-node-2] 2026-02-05 02:25:26.414414 | orchestrator | 2026-02-05 02:25:26.414419 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-05 02:25:26.414424 | orchestrator | Thursday 05 February 2026 02:25:08 +0000 (0:00:06.267) 0:01:59.383 ***** 2026-02-05 02:25:26.414428 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:25:26.414432 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:25:26.414436 | orchestrator | 2026-02-05 02:25:26.414440 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-05 02:25:26.414458 | orchestrator | Thursday 05 February 2026 02:25:14 +0000 (0:00:06.299) 0:02:05.683 ***** 2026-02-05 02:25:26.414462 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:25:26.414466 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:25:26.414470 | orchestrator | 2026-02-05 02:25:26.414474 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-05 02:25:26.414478 | orchestrator | Thursday 05 February 2026 02:25:21 +0000 (0:00:06.132) 0:02:11.815 ***** 2026-02-05 02:25:26.414482 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:25:26.414486 | orchestrator | 2026-02-05 02:25:26.414490 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-05 02:25:26.414502 | orchestrator | Thursday 05 February 2026 02:25:21 +0000 (0:00:00.137) 0:02:11.953 ***** 2026-02-05 02:25:26.414506 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:25:26.414511 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:25:26.414515 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:25:26.414519 | orchestrator | 2026-02-05 02:25:26.414523 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-05 02:25:26.414526 | orchestrator | Thursday 05 February 2026 02:25:21 +0000 (0:00:00.756) 0:02:12.709 ***** 
2026-02-05 02:25:26.414530 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:25:26.414534 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:25:26.414538 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:25:26.414542 | orchestrator | 2026-02-05 02:25:26.414545 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-05 02:25:26.414549 | orchestrator | Thursday 05 February 2026 02:25:22 +0000 (0:00:00.645) 0:02:13.355 ***** 2026-02-05 02:25:26.414553 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:25:26.414557 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:25:26.414561 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:25:26.414565 | orchestrator | 2026-02-05 02:25:26.414569 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-05 02:25:26.414572 | orchestrator | Thursday 05 February 2026 02:25:23 +0000 (0:00:00.768) 0:02:14.123 ***** 2026-02-05 02:25:26.414576 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:25:26.414580 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:25:26.414584 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:25:26.414588 | orchestrator | 2026-02-05 02:25:26.414591 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-05 02:25:26.414595 | orchestrator | Thursday 05 February 2026 02:25:24 +0000 (0:00:00.646) 0:02:14.770 ***** 2026-02-05 02:25:26.414599 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:25:26.414603 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:25:26.414607 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:25:26.414610 | orchestrator | 2026-02-05 02:25:26.414614 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-05 02:25:26.414618 | orchestrator | Thursday 05 February 2026 02:25:24 +0000 (0:00:00.881) 0:02:15.652 ***** 2026-02-05 02:25:26.414622 | orchestrator 
| ok: [testbed-node-0] 2026-02-05 02:25:26.414625 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:25:26.414629 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:25:26.414633 | orchestrator | 2026-02-05 02:25:26.414637 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:25:26.414642 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-05 02:25:26.414647 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-05 02:25:26.414651 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-05 02:25:26.414655 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:25:26.414664 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:25:26.414668 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:25:26.414671 | orchestrator | 2026-02-05 02:25:26.414675 | orchestrator | 2026-02-05 02:25:26.414690 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:25:26.414694 | orchestrator | Thursday 05 February 2026 02:25:26 +0000 (0:00:01.266) 0:02:16.918 ***** 2026-02-05 02:25:26.414698 | orchestrator | =============================================================================== 2026-02-05 02:25:26.414702 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.27s 2026-02-05 02:25:26.414706 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.41s 2026-02-05 02:25:26.414710 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.79s 2026-02-05 02:25:26.414714 | orchestrator | ovn-db 
: Restart ovn-northd container ---------------------------------- 13.51s 2026-02-05 02:25:26.414717 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.89s 2026-02-05 02:25:26.414730 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.66s 2026-02-05 02:25:26.414734 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.58s 2026-02-05 02:25:26.414738 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.88s 2026-02-05 02:25:26.414742 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.83s 2026-02-05 02:25:26.414746 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.12s 2026-02-05 02:25:26.414750 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.61s 2026-02-05 02:25:26.414754 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.61s 2026-02-05 02:25:26.414757 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.61s 2026-02-05 02:25:26.414761 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s 2026-02-05 02:25:26.414765 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2026-02-05 02:25:26.414769 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.30s 2026-02-05 02:25:26.414773 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.27s 2026-02-05 02:25:26.414776 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.17s 2026-02-05 02:25:26.414780 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.11s 2026-02-05 02:25:26.414784 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.08s 2026-02-05 02:25:26.610171 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 02:25:26.610271 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-05 02:25:28.510618 | orchestrator | 2026-02-05 02:25:28 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-05 02:25:38.661812 | orchestrator | 2026-02-05 02:25:38 | INFO  | Task 1e7c1521-a765-449e-a006-b8c2cc6083be (wipe-partitions) was prepared for execution. 2026-02-05 02:25:38.661897 | orchestrator | 2026-02-05 02:25:38 | INFO  | It takes a moment until task 1e7c1521-a765-449e-a006-b8c2cc6083be (wipe-partitions) has been started and output is visible here. 2026-02-05 02:25:52.449050 | orchestrator | 2026-02-05 02:25:52.449188 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-05 02:25:52.449205 | orchestrator | 2026-02-05 02:25:52.449215 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-05 02:25:52.449224 | orchestrator | Thursday 05 February 2026 02:25:42 +0000 (0:00:00.125) 0:00:00.125 ***** 2026-02-05 02:25:52.449258 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:25:52.449269 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:25:52.449276 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:25:52.449285 | orchestrator | 2026-02-05 02:25:52.449293 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-05 02:25:52.449301 | orchestrator | Thursday 05 February 2026 02:25:44 +0000 (0:00:01.636) 0:00:01.762 ***** 2026-02-05 02:25:52.449310 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:25:52.449318 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:25:52.449326 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:25:52.449334 | orchestrator | 2026-02-05 02:25:52.449343 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-05 02:25:52.449352 | orchestrator | Thursday 05 February 2026 02:25:44 +0000 (0:00:00.405) 0:00:02.168 ***** 2026-02-05 02:25:52.449360 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:25:52.449370 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:25:52.449377 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:25:52.449385 | orchestrator | 2026-02-05 02:25:52.449394 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-05 02:25:52.449402 | orchestrator | Thursday 05 February 2026 02:25:45 +0000 (0:00:00.696) 0:00:02.864 ***** 2026-02-05 02:25:52.449410 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:25:52.449419 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:25:52.449428 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:25:52.449438 | orchestrator | 2026-02-05 02:25:52.449447 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-05 02:25:52.449455 | orchestrator | Thursday 05 February 2026 02:25:45 +0000 (0:00:00.254) 0:00:03.119 ***** 2026-02-05 02:25:52.449477 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-05 02:25:52.449493 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-05 02:25:52.449501 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-05 02:25:52.449509 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-05 02:25:52.449517 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-05 02:25:52.449524 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-05 02:25:52.449547 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-05 02:25:52.449555 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-05 02:25:52.449564 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-05 02:25:52.449571 | orchestrator | 2026-02-05 02:25:52.449579 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-05 02:25:52.449587 | orchestrator | Thursday 05 February 2026 02:25:47 +0000 (0:00:01.287) 0:00:04.406 ***** 2026-02-05 02:25:52.449595 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-05 02:25:52.449604 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-05 02:25:52.449612 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-05 02:25:52.449620 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-05 02:25:52.449629 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-05 02:25:52.449637 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-05 02:25:52.449646 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-05 02:25:52.449655 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-05 02:25:52.449664 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-05 02:25:52.449672 | orchestrator | 2026-02-05 02:25:52.449682 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-05 02:25:52.449690 | orchestrator | Thursday 05 February 2026 02:25:48 +0000 (0:00:01.573) 0:00:05.980 ***** 2026-02-05 02:25:52.449700 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-05 02:25:52.449708 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-05 02:25:52.449717 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-05 02:25:52.449725 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-05 02:25:52.449743 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-05 02:25:52.449751 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-05 02:25:52.449758 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-05 02:25:52.449767 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-05 02:25:52.449776 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-05 02:25:52.449784 | orchestrator | 2026-02-05 02:25:52.449792 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-05 02:25:52.449802 | orchestrator | Thursday 05 February 2026 02:25:50 +0000 (0:00:02.176) 0:00:08.156 ***** 2026-02-05 02:25:52.449810 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:25:52.449819 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:25:52.449828 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:25:52.449836 | orchestrator | 2026-02-05 02:25:52.449849 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-05 02:25:52.449865 | orchestrator | Thursday 05 February 2026 02:25:51 +0000 (0:00:00.634) 0:00:08.791 ***** 2026-02-05 02:25:52.449876 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:25:52.449888 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:25:52.449895 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:25:52.449903 | orchestrator | 2026-02-05 02:25:52.449911 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:25:52.449920 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:25:52.449931 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:25:52.449959 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:25:52.449968 | orchestrator | 2026-02-05 02:25:52.449976 | orchestrator | 2026-02-05 02:25:52.449984 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:25:52.449991 | orchestrator | Thursday 05 February 2026 02:25:52 +0000 
(0:00:00.660) 0:00:09.451 ***** 2026-02-05 02:25:52.449998 | orchestrator | =============================================================================== 2026-02-05 02:25:52.450006 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2026-02-05 02:25:52.450067 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.64s 2026-02-05 02:25:52.450079 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s 2026-02-05 02:25:52.450087 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2026-02-05 02:25:52.450095 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s 2026-02-05 02:25:52.450104 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2026-02-05 02:25:52.450109 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2026-02-05 02:25:52.450115 | orchestrator | Remove all rook related logical devices --------------------------------- 0.41s 2026-02-05 02:25:52.450200 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-02-05 02:26:04.830326 | orchestrator | 2026-02-05 02:26:04 | INFO  | Task 656a5b19-8d86-4c93-954f-b4e660267e34 (facts) was prepared for execution. 2026-02-05 02:26:04.830403 | orchestrator | 2026-02-05 02:26:04 | INFO  | It takes a moment until task 656a5b19-8d86-4c93-954f-b4e660267e34 (facts) has been started and output is visible here. 
2026-02-05 02:26:17.937998 | orchestrator | 2026-02-05 02:26:17.938272 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-05 02:26:17.938292 | orchestrator | 2026-02-05 02:26:17.938304 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 02:26:17.938316 | orchestrator | Thursday 05 February 2026 02:26:09 +0000 (0:00:00.264) 0:00:00.264 ***** 2026-02-05 02:26:17.938364 | orchestrator | ok: [testbed-manager] 2026-02-05 02:26:17.938386 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:26:17.938405 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:26:17.938423 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:26:17.938441 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:26:17.938458 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:26:17.938478 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:26:17.938496 | orchestrator | 2026-02-05 02:26:17.938517 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 02:26:17.938538 | orchestrator | Thursday 05 February 2026 02:26:10 +0000 (0:00:01.120) 0:00:01.385 ***** 2026-02-05 02:26:17.938559 | orchestrator | skipping: [testbed-manager] 2026-02-05 02:26:17.938575 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:26:17.938588 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:26:17.938599 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:26:17.938610 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:26:17.938620 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:17.938632 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:26:17.938642 | orchestrator | 2026-02-05 02:26:17.938653 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 02:26:17.938664 | orchestrator | 2026-02-05 02:26:17.938675 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-05 02:26:17.938686 | orchestrator | Thursday 05 February 2026 02:26:11 +0000 (0:00:01.334) 0:00:02.719 ***** 2026-02-05 02:26:17.938697 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:26:17.938707 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:26:17.938718 | orchestrator | ok: [testbed-manager] 2026-02-05 02:26:17.938729 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:26:17.938740 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:26:17.938750 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:26:17.938761 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:26:17.938772 | orchestrator | 2026-02-05 02:26:17.938783 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 02:26:17.938794 | orchestrator | 2026-02-05 02:26:17.938804 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 02:26:17.938815 | orchestrator | Thursday 05 February 2026 02:26:17 +0000 (0:00:05.336) 0:00:08.056 ***** 2026-02-05 02:26:17.938826 | orchestrator | skipping: [testbed-manager] 2026-02-05 02:26:17.938837 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:26:17.938848 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:26:17.938859 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:26:17.938870 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:26:17.938881 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:17.938891 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:26:17.938902 | orchestrator | 2026-02-05 02:26:17.938913 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:26:17.938924 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:26:17.938988 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-05 02:26:17.939001 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:26:17.939012 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:26:17.939023 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:26:17.939034 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:26:17.939056 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:26:17.939067 | orchestrator | 2026-02-05 02:26:17.939078 | orchestrator | 2026-02-05 02:26:17.939089 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:26:17.939100 | orchestrator | Thursday 05 February 2026 02:26:17 +0000 (0:00:00.521) 0:00:08.577 ***** 2026-02-05 02:26:17.939111 | orchestrator | =============================================================================== 2026-02-05 02:26:17.939149 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.34s 2026-02-05 02:26:17.939161 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-02-05 02:26:17.939172 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-02-05 02:26:17.939183 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-02-05 02:26:20.364391 | orchestrator | 2026-02-05 02:26:20 | INFO  | Task e0c2ee85-f4c4-4a41-a4e5-ab3488972cfa (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-05 02:26:20.364475 | orchestrator | 2026-02-05 02:26:20 | INFO  | It takes a moment until task e0c2ee85-f4c4-4a41-a4e5-ab3488972cfa (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-05 02:26:32.620973 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 02:26:32.621053 | orchestrator | 2.16.14 2026-02-05 02:26:32.621060 | orchestrator | 2026-02-05 02:26:32.621065 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-05 02:26:32.621070 | orchestrator | 2026-02-05 02:26:32.621075 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 02:26:32.621079 | orchestrator | Thursday 05 February 2026 02:26:24 +0000 (0:00:00.320) 0:00:00.320 ***** 2026-02-05 02:26:32.621085 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 02:26:32.621089 | orchestrator | 2026-02-05 02:26:32.621104 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 02:26:32.621108 | orchestrator | Thursday 05 February 2026 02:26:24 +0000 (0:00:00.264) 0:00:00.584 ***** 2026-02-05 02:26:32.621112 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:26:32.621145 | orchestrator | 2026-02-05 02:26:32.621150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:26:32.621154 | orchestrator | Thursday 05 February 2026 02:26:25 +0000 (0:00:00.233) 0:00:00.818 ***** 2026-02-05 02:26:32.621158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-05 02:26:32.621163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-05 02:26:32.621166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-05 02:26:32.621170 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-05 02:26:32.621174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-05 02:26:32.621178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-05 02:26:32.621182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-05 02:26:32.621185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-05 02:26:32.621190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-05 02:26:32.621193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-05 02:26:32.621197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-05 02:26:32.621201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-05 02:26:32.621220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-05 02:26:32.621224 | orchestrator |
2026-02-05 02:26:32.621228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621232 | orchestrator | Thursday 05 February 2026 02:26:25 +0000 (0:00:00.483) 0:00:01.302 *****
2026-02-05 02:26:32.621236 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621240 | orchestrator |
2026-02-05 02:26:32.621244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621247 | orchestrator | Thursday 05 February 2026 02:26:25 +0000 (0:00:00.199) 0:00:01.502 *****
2026-02-05 02:26:32.621251 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621255 | orchestrator |
2026-02-05 02:26:32.621259 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621262 | orchestrator | Thursday 05 February 2026 02:26:26 +0000 (0:00:00.203) 0:00:01.705 *****
2026-02-05 02:26:32.621266 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621270 | orchestrator |
2026-02-05 02:26:32.621274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621278 | orchestrator | Thursday 05 February 2026 02:26:26 +0000 (0:00:00.217) 0:00:01.923 *****
2026-02-05 02:26:32.621281 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621285 | orchestrator |
2026-02-05 02:26:32.621289 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621292 | orchestrator | Thursday 05 February 2026 02:26:26 +0000 (0:00:00.210) 0:00:02.133 *****
2026-02-05 02:26:32.621296 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621300 | orchestrator |
2026-02-05 02:26:32.621304 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621307 | orchestrator | Thursday 05 February 2026 02:26:26 +0000 (0:00:00.210) 0:00:02.344 *****
2026-02-05 02:26:32.621311 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621315 | orchestrator |
2026-02-05 02:26:32.621318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621322 | orchestrator | Thursday 05 February 2026 02:26:26 +0000 (0:00:00.211) 0:00:02.555 *****
2026-02-05 02:26:32.621326 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621330 | orchestrator |
2026-02-05 02:26:32.621334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621337 | orchestrator | Thursday 05 February 2026 02:26:27 +0000 (0:00:00.211) 0:00:02.766 *****
2026-02-05 02:26:32.621341 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621345 | orchestrator |
2026-02-05 02:26:32.621348 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621352 | orchestrator | Thursday 05 February 2026 02:26:27 +0000 (0:00:00.195) 0:00:02.962 *****
2026-02-05 02:26:32.621356 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97)
2026-02-05 02:26:32.621361 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97)
2026-02-05 02:26:32.621365 | orchestrator |
2026-02-05 02:26:32.621369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621383 | orchestrator | Thursday 05 February 2026 02:26:27 +0000 (0:00:00.631) 0:00:03.594 *****
2026-02-05 02:26:32.621387 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2)
2026-02-05 02:26:32.621391 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2)
2026-02-05 02:26:32.621395 | orchestrator |
2026-02-05 02:26:32.621399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621403 | orchestrator | Thursday 05 February 2026 02:26:28 +0000 (0:00:00.683) 0:00:04.277 *****
2026-02-05 02:26:32.621410 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b)
2026-02-05 02:26:32.621418 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b)
2026-02-05 02:26:32.621422 | orchestrator |
2026-02-05 02:26:32.621426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621429 | orchestrator | Thursday 05 February 2026 02:26:29 +0000 (0:00:00.853) 0:00:05.130 *****
2026-02-05 02:26:32.621433 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff)
2026-02-05 02:26:32.621437 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff)
2026-02-05 02:26:32.621441 | orchestrator |
2026-02-05 02:26:32.621445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:32.621448 | orchestrator | Thursday 05 February 2026 02:26:29 +0000 (0:00:00.463) 0:00:05.594 *****
2026-02-05 02:26:32.621452 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 02:26:32.621456 | orchestrator |
2026-02-05 02:26:32.621460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621464 | orchestrator | Thursday 05 February 2026 02:26:30 +0000 (0:00:00.349) 0:00:05.944 *****
2026-02-05 02:26:32.621467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-05 02:26:32.621471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-05 02:26:32.621475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-05 02:26:32.621479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-05 02:26:32.621482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-05 02:26:32.621486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-05 02:26:32.621490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-05 02:26:32.621494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-05 02:26:32.621497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-05 02:26:32.621501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-05 02:26:32.621505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-05 02:26:32.621509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-05 02:26:32.621512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-05 02:26:32.621516 | orchestrator |
2026-02-05 02:26:32.621520 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621524 | orchestrator | Thursday 05 February 2026 02:26:30 +0000 (0:00:00.388) 0:00:06.333 *****
2026-02-05 02:26:32.621528 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621531 | orchestrator |
2026-02-05 02:26:32.621535 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621539 | orchestrator | Thursday 05 February 2026 02:26:30 +0000 (0:00:00.226) 0:00:06.559 *****
2026-02-05 02:26:32.621543 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621546 | orchestrator |
2026-02-05 02:26:32.621551 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621556 | orchestrator | Thursday 05 February 2026 02:26:31 +0000 (0:00:00.212) 0:00:06.771 *****
2026-02-05 02:26:32.621560 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621564 | orchestrator |
2026-02-05 02:26:32.621569 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621573 | orchestrator | Thursday 05 February 2026 02:26:31 +0000 (0:00:00.224) 0:00:06.995 *****
2026-02-05 02:26:32.621578 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621585 | orchestrator |
2026-02-05 02:26:32.621590 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621594 | orchestrator | Thursday 05 February 2026 02:26:31 +0000 (0:00:00.215) 0:00:07.211 *****
2026-02-05 02:26:32.621599 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621603 | orchestrator |
2026-02-05 02:26:32.621607 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621612 | orchestrator | Thursday 05 February 2026 02:26:31 +0000 (0:00:00.215) 0:00:07.427 *****
2026-02-05 02:26:32.621616 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621621 | orchestrator |
2026-02-05 02:26:32.621625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:32.621630 | orchestrator | Thursday 05 February 2026 02:26:32 +0000 (0:00:00.579) 0:00:08.006 *****
2026-02-05 02:26:32.621634 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:32.621638 | orchestrator |
2026-02-05 02:26:32.621645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:39.888521 | orchestrator | Thursday 05 February 2026 02:26:32 +0000 (0:00:00.206) 0:00:08.213 *****
2026-02-05 02:26:39.888632 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.888651 | orchestrator |
2026-02-05 02:26:39.888665 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:39.888677 | orchestrator | Thursday 05 February 2026 02:26:32 +0000 (0:00:00.205) 0:00:08.419 *****
2026-02-05 02:26:39.888688 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-05 02:26:39.888700 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-05 02:26:39.888712 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-05 02:26:39.888738 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-05 02:26:39.888750 | orchestrator |
2026-02-05 02:26:39.888762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:39.888773 | orchestrator | Thursday 05 February 2026 02:26:33 +0000 (0:00:00.673) 0:00:09.092 *****
2026-02-05 02:26:39.888784 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.888795 | orchestrator |
2026-02-05 02:26:39.888807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:39.888818 | orchestrator | Thursday 05 February 2026 02:26:33 +0000 (0:00:00.216) 0:00:09.309 *****
2026-02-05 02:26:39.888829 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.888840 | orchestrator |
2026-02-05 02:26:39.888852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:39.888863 | orchestrator | Thursday 05 February 2026 02:26:33 +0000 (0:00:00.217) 0:00:09.526 *****
2026-02-05 02:26:39.888874 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.888885 | orchestrator |
2026-02-05 02:26:39.888896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:39.888907 | orchestrator | Thursday 05 February 2026 02:26:34 +0000 (0:00:00.210) 0:00:09.736 *****
2026-02-05 02:26:39.888918 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.888929 | orchestrator |
2026-02-05 02:26:39.888940 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-05 02:26:39.888952 | orchestrator | Thursday 05 February 2026 02:26:34 +0000 (0:00:00.214) 0:00:09.951 *****
2026-02-05 02:26:39.888963 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-05 02:26:39.888974 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-05 02:26:39.888985 | orchestrator |
2026-02-05 02:26:39.888996 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-05 02:26:39.889008 | orchestrator | Thursday 05 February 2026 02:26:34 +0000 (0:00:00.185) 0:00:10.136 *****
2026-02-05 02:26:39.889019 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889030 | orchestrator |
2026-02-05 02:26:39.889041 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-05 02:26:39.889052 | orchestrator | Thursday 05 February 2026 02:26:34 +0000 (0:00:00.143) 0:00:10.280 *****
2026-02-05 02:26:39.889090 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889104 | orchestrator |
2026-02-05 02:26:39.889161 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-05 02:26:39.889175 | orchestrator | Thursday 05 February 2026 02:26:34 +0000 (0:00:00.144) 0:00:10.425 *****
2026-02-05 02:26:39.889187 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889199 | orchestrator |
2026-02-05 02:26:39.889212 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-05 02:26:39.889225 | orchestrator | Thursday 05 February 2026 02:26:35 +0000 (0:00:00.328) 0:00:10.753 *****
2026-02-05 02:26:39.889237 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:26:39.889250 | orchestrator |
2026-02-05 02:26:39.889263 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-05 02:26:39.889276 | orchestrator | Thursday 05 February 2026 02:26:35 +0000 (0:00:00.150) 0:00:10.904 *****
2026-02-05 02:26:39.889289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de37fca4-ea41-596c-ab1a-50038d0e278e'}})
2026-02-05 02:26:39.889303 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '825a1c54-3e62-51fa-b7a4-9af3e8833567'}})
2026-02-05 02:26:39.889316 | orchestrator |
2026-02-05 02:26:39.889328 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-05 02:26:39.889340 | orchestrator | Thursday 05 February 2026 02:26:35 +0000 (0:00:00.182) 0:00:11.087 *****
2026-02-05 02:26:39.889354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de37fca4-ea41-596c-ab1a-50038d0e278e'}})
2026-02-05 02:26:39.889369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '825a1c54-3e62-51fa-b7a4-9af3e8833567'}})
2026-02-05 02:26:39.889382 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889394 | orchestrator |
2026-02-05 02:26:39.889407 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-05 02:26:39.889419 | orchestrator | Thursday 05 February 2026 02:26:35 +0000 (0:00:00.158) 0:00:11.245 *****
2026-02-05 02:26:39.889429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de37fca4-ea41-596c-ab1a-50038d0e278e'}})
2026-02-05 02:26:39.889441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '825a1c54-3e62-51fa-b7a4-9af3e8833567'}})
2026-02-05 02:26:39.889451 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889463 | orchestrator |
2026-02-05 02:26:39.889474 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-05 02:26:39.889484 | orchestrator | Thursday 05 February 2026 02:26:35 +0000 (0:00:00.153) 0:00:11.399 *****
2026-02-05 02:26:39.889495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de37fca4-ea41-596c-ab1a-50038d0e278e'}})
2026-02-05 02:26:39.889523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '825a1c54-3e62-51fa-b7a4-9af3e8833567'}})
2026-02-05 02:26:39.889534 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889545 | orchestrator |
2026-02-05 02:26:39.889557 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-05 02:26:39.889568 | orchestrator | Thursday 05 February 2026 02:26:35 +0000 (0:00:00.176) 0:00:11.576 *****
2026-02-05 02:26:39.889579 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:26:39.889590 | orchestrator |
2026-02-05 02:26:39.889600 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-05 02:26:39.889617 | orchestrator | Thursday 05 February 2026 02:26:36 +0000 (0:00:00.147) 0:00:11.724 *****
2026-02-05 02:26:39.889628 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:26:39.889639 | orchestrator |
2026-02-05 02:26:39.889650 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-05 02:26:39.889661 | orchestrator | Thursday 05 February 2026 02:26:36 +0000 (0:00:00.149) 0:00:11.873 *****
2026-02-05 02:26:39.889681 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889693 | orchestrator |
2026-02-05 02:26:39.889704 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-05 02:26:39.889714 | orchestrator | Thursday 05 February 2026 02:26:36 +0000 (0:00:00.131) 0:00:12.005 *****
2026-02-05 02:26:39.889725 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889736 | orchestrator |
2026-02-05 02:26:39.889746 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-05 02:26:39.889757 | orchestrator | Thursday 05 February 2026 02:26:36 +0000 (0:00:00.140) 0:00:12.146 *****
2026-02-05 02:26:39.889768 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889778 | orchestrator |
2026-02-05 02:26:39.889789 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-05 02:26:39.889800 | orchestrator | Thursday 05 February 2026 02:26:36 +0000 (0:00:00.142) 0:00:12.288 *****
2026-02-05 02:26:39.889811 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 02:26:39.889822 | orchestrator |     "ceph_osd_devices": {
2026-02-05 02:26:39.889833 | orchestrator |         "sdb": {
2026-02-05 02:26:39.889845 | orchestrator |             "osd_lvm_uuid": "de37fca4-ea41-596c-ab1a-50038d0e278e"
2026-02-05 02:26:39.889856 | orchestrator |         },
2026-02-05 02:26:39.889867 | orchestrator |         "sdc": {
2026-02-05 02:26:39.889878 | orchestrator |             "osd_lvm_uuid": "825a1c54-3e62-51fa-b7a4-9af3e8833567"
2026-02-05 02:26:39.889888 | orchestrator |         }
2026-02-05 02:26:39.889899 | orchestrator |     }
2026-02-05 02:26:39.889910 | orchestrator | }
2026-02-05 02:26:39.889921 | orchestrator |
2026-02-05 02:26:39.889932 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-05 02:26:39.889943 | orchestrator | Thursday 05 February 2026 02:26:36 +0000 (0:00:00.310) 0:00:12.599 *****
2026-02-05 02:26:39.889954 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.889964 | orchestrator |
2026-02-05 02:26:39.889975 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-05 02:26:39.889986 | orchestrator | Thursday 05 February 2026 02:26:37 +0000 (0:00:00.140) 0:00:12.739 *****
2026-02-05 02:26:39.889997 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.890007 | orchestrator |
2026-02-05 02:26:39.890095 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-05 02:26:39.890107 | orchestrator | Thursday 05 February 2026 02:26:37 +0000 (0:00:00.138) 0:00:12.877 *****
2026-02-05 02:26:39.890148 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:26:39.890160 | orchestrator |
2026-02-05 02:26:39.890171 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-05 02:26:39.890182 | orchestrator | Thursday 05 February 2026 02:26:37 +0000 (0:00:00.129) 0:00:13.007 *****
2026-02-05 02:26:39.890193 | orchestrator | changed: [testbed-node-3] => {
2026-02-05 02:26:39.890204 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-05 02:26:39.890215 | orchestrator |         "ceph_osd_devices": {
2026-02-05 02:26:39.890226 | orchestrator |             "sdb": {
2026-02-05 02:26:39.890237 | orchestrator |                 "osd_lvm_uuid": "de37fca4-ea41-596c-ab1a-50038d0e278e"
2026-02-05 02:26:39.890254 | orchestrator |             },
2026-02-05 02:26:39.890273 | orchestrator |             "sdc": {
2026-02-05 02:26:39.890291 | orchestrator |                 "osd_lvm_uuid": "825a1c54-3e62-51fa-b7a4-9af3e8833567"
2026-02-05 02:26:39.890309 | orchestrator |             }
2026-02-05 02:26:39.890327 | orchestrator |         },
2026-02-05 02:26:39.890345 | orchestrator |         "lvm_volumes": [
2026-02-05 02:26:39.890363 | orchestrator |             {
2026-02-05 02:26:39.890379 | orchestrator |                 "data": "osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e",
2026-02-05 02:26:39.890398 | orchestrator |                 "data_vg": "ceph-de37fca4-ea41-596c-ab1a-50038d0e278e"
2026-02-05 02:26:39.890416 | orchestrator |             },
2026-02-05 02:26:39.890434 | orchestrator |             {
2026-02-05 02:26:39.890453 | orchestrator |                 "data": "osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567",
2026-02-05 02:26:39.890485 | orchestrator |                 "data_vg": "ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567"
2026-02-05 02:26:39.890504 | orchestrator |             }
2026-02-05 02:26:39.890523 | orchestrator |         ]
2026-02-05 02:26:39.890537 | orchestrator |     }
2026-02-05 02:26:39.890548 | orchestrator | }
2026-02-05 02:26:39.890559 | orchestrator |
2026-02-05 02:26:39.890570 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-05 02:26:39.890581 | orchestrator | Thursday 05 February 2026 02:26:37 +0000 (0:00:00.217) 0:00:13.225 *****
2026-02-05 02:26:39.890592 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 02:26:39.890603 | orchestrator |
2026-02-05 02:26:39.890614 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-05 02:26:39.890624 | orchestrator |
2026-02-05 02:26:39.890636 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 02:26:39.890646 | orchestrator | Thursday 05 February 2026 02:26:39 +0000 (0:00:01.786) 0:00:15.011 *****
2026-02-05 02:26:39.890657 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-05 02:26:39.890668 | orchestrator |
2026-02-05 02:26:39.890678 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-05 02:26:39.890689 | orchestrator | Thursday 05 February 2026 02:26:39 +0000 (0:00:00.242) 0:00:15.254 *****
2026-02-05 02:26:39.890700 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:26:39.890711 | orchestrator |
2026-02-05 02:26:39.890734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433186 | orchestrator | Thursday 05 February 2026 02:26:39 +0000 (0:00:00.230) 0:00:15.484 *****
2026-02-05 02:26:48.433303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-05 02:26:48.433321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-05 02:26:48.433333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-05 02:26:48.433363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-05 02:26:48.433374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-05 02:26:48.433385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-05 02:26:48.433397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-05 02:26:48.433408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-05 02:26:48.433419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-05 02:26:48.433429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-05 02:26:48.433440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-05 02:26:48.433451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-05 02:26:48.433462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-05 02:26:48.433473 | orchestrator |
2026-02-05 02:26:48.433485 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433499 | orchestrator | Thursday 05 February 2026 02:26:40 +0000 (0:00:00.557) 0:00:16.042 *****
2026-02-05 02:26:48.433518 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433531 | orchestrator |
2026-02-05 02:26:48.433548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433562 | orchestrator | Thursday 05 February 2026 02:26:40 +0000 (0:00:00.205) 0:00:16.247 *****
2026-02-05 02:26:48.433573 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433584 | orchestrator |
2026-02-05 02:26:48.433595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433606 | orchestrator | Thursday 05 February 2026 02:26:40 +0000 (0:00:00.214) 0:00:16.462 *****
2026-02-05 02:26:48.433640 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433652 | orchestrator |
2026-02-05 02:26:48.433663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433674 | orchestrator | Thursday 05 February 2026 02:26:41 +0000 (0:00:00.211) 0:00:16.673 *****
2026-02-05 02:26:48.433687 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433700 | orchestrator |
2026-02-05 02:26:48.433713 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433726 | orchestrator | Thursday 05 February 2026 02:26:41 +0000 (0:00:00.196) 0:00:16.870 *****
2026-02-05 02:26:48.433738 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433751 | orchestrator |
2026-02-05 02:26:48.433764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433777 | orchestrator | Thursday 05 February 2026 02:26:41 +0000 (0:00:00.196) 0:00:17.067 *****
2026-02-05 02:26:48.433790 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433802 | orchestrator |
2026-02-05 02:26:48.433815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433827 | orchestrator | Thursday 05 February 2026 02:26:41 +0000 (0:00:00.203) 0:00:17.270 *****
2026-02-05 02:26:48.433840 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433853 | orchestrator |
2026-02-05 02:26:48.433866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433878 | orchestrator | Thursday 05 February 2026 02:26:41 +0000 (0:00:00.218) 0:00:17.489 *****
2026-02-05 02:26:48.433891 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.433903 | orchestrator |
2026-02-05 02:26:48.433916 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433929 | orchestrator | Thursday 05 February 2026 02:26:42 +0000 (0:00:00.199) 0:00:17.688 *****
2026-02-05 02:26:48.433942 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde)
2026-02-05 02:26:48.433957 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde)
2026-02-05 02:26:48.433969 | orchestrator |
2026-02-05 02:26:48.433982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.433995 | orchestrator | Thursday 05 February 2026 02:26:42 +0000 (0:00:00.634) 0:00:18.323 *****
2026-02-05 02:26:48.434008 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a)
2026-02-05 02:26:48.434086 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a)
2026-02-05 02:26:48.434098 | orchestrator |
2026-02-05 02:26:48.434109 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.434162 | orchestrator | Thursday 05 February 2026 02:26:43 +0000 (0:00:00.675) 0:00:18.999 *****
2026-02-05 02:26:48.434186 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930)
2026-02-05 02:26:48.434205 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930)
2026-02-05 02:26:48.434222 | orchestrator |
2026-02-05 02:26:48.434240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.434283 | orchestrator | Thursday 05 February 2026 02:26:44 +0000 (0:00:00.884) 0:00:19.883 *****
2026-02-05 02:26:48.434302 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a)
2026-02-05 02:26:48.434314 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a)
2026-02-05 02:26:48.434325 | orchestrator |
2026-02-05 02:26:48.434335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 02:26:48.434363 | orchestrator | Thursday 05 February 2026 02:26:44 +0000 (0:00:00.448) 0:00:20.331 *****
2026-02-05 02:26:48.434374 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 02:26:48.434396 | orchestrator |
2026-02-05 02:26:48.434407 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434418 | orchestrator | Thursday 05 February 2026 02:26:45 +0000 (0:00:00.353) 0:00:20.685 *****
2026-02-05 02:26:48.434429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-05 02:26:48.434439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-05 02:26:48.434450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-05 02:26:48.434460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-05 02:26:48.434471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-05 02:26:48.434481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-05 02:26:48.434492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-05 02:26:48.434502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-05 02:26:48.434513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-05 02:26:48.434524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-05 02:26:48.434535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-05 02:26:48.434545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-05 02:26:48.434556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-05 02:26:48.434567 | orchestrator |
2026-02-05 02:26:48.434577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434588 | orchestrator | Thursday 05 February 2026 02:26:45 +0000 (0:00:00.399) 0:00:21.085 *****
2026-02-05 02:26:48.434599 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434609 | orchestrator |
2026-02-05 02:26:48.434620 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434631 | orchestrator | Thursday 05 February 2026 02:26:45 +0000 (0:00:00.202) 0:00:21.287 *****
2026-02-05 02:26:48.434642 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434652 | orchestrator |
2026-02-05 02:26:48.434663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434674 | orchestrator | Thursday 05 February 2026 02:26:45 +0000 (0:00:00.197) 0:00:21.484 *****
2026-02-05 02:26:48.434684 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434695 | orchestrator |
2026-02-05 02:26:48.434706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434716 | orchestrator | Thursday 05 February 2026 02:26:46 +0000 (0:00:00.199) 0:00:21.684 *****
2026-02-05 02:26:48.434763 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434775 | orchestrator |
2026-02-05 02:26:48.434785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434796 | orchestrator | Thursday 05 February 2026 02:26:46 +0000 (0:00:00.198) 0:00:21.883 *****
2026-02-05 02:26:48.434807 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434818 | orchestrator |
2026-02-05 02:26:48.434843 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434865 | orchestrator | Thursday 05 February 2026 02:26:46 +0000 (0:00:00.204) 0:00:22.087 *****
2026-02-05 02:26:48.434876 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434887 | orchestrator |
2026-02-05 02:26:48.434898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434921 | orchestrator | Thursday 05 February 2026 02:26:46 +0000 (0:00:00.203) 0:00:22.291 *****
2026-02-05 02:26:48.434932 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.434951 | orchestrator |
2026-02-05 02:26:48.434983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.434994 | orchestrator | Thursday 05 February 2026 02:26:46 +0000 (0:00:00.208) 0:00:22.499 *****
2026-02-05 02:26:48.435005 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:48.435029 | orchestrator |
2026-02-05 02:26:48.435050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.435061 | orchestrator | Thursday 05 February 2026 02:26:47 +0000 (0:00:00.600) 0:00:23.100 *****
2026-02-05 02:26:48.435072 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-05 02:26:48.435084 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-05 02:26:48.435095 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-05 02:26:48.435106 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-05 02:26:48.435156 | orchestrator |
2026-02-05 02:26:48.435169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:48.435180 | orchestrator | Thursday 05 February 2026 02:26:48 +0000 (0:00:00.708) 0:00:23.809 *****
2026-02-05 02:26:48.435191 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.588600 | orchestrator |
2026-02-05 02:26:54.588708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:54.588729 | orchestrator | Thursday 05 February 2026 02:26:48 +0000 (0:00:00.222) 0:00:24.031 *****
2026-02-05 02:26:54.588744 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.588760 | orchestrator |
2026-02-05 02:26:54.588769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:54.588778 | orchestrator | Thursday 05 February 2026 02:26:48 +0000 (0:00:00.211) 0:00:24.243 *****
2026-02-05 02:26:54.588801 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.588809 | orchestrator |
2026-02-05 02:26:54.588818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:26:54.588826 | orchestrator | Thursday 05 February 2026 02:26:48 +0000 (0:00:00.223) 0:00:24.467 *****
2026-02-05 02:26:54.588834 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.588842 | orchestrator |
2026-02-05 02:26:54.588850 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-05 02:26:54.588857 | orchestrator | Thursday 05 February 2026 02:26:49 +0000 (0:00:00.257) 0:00:24.724 *****
2026-02-05 02:26:54.588865 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-05 02:26:54.588874 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-05 02:26:54.588882 | orchestrator |
2026-02-05 02:26:54.588890 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-05 02:26:54.588899 | orchestrator | Thursday 05 February 2026 02:26:49 +0000 (0:00:00.170) 0:00:24.894 *****
2026-02-05 02:26:54.588912 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.588926 | orchestrator |
2026-02-05 02:26:54.588939 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-05 02:26:54.588951 | orchestrator | Thursday 05 February 2026 02:26:49 +0000 (0:00:00.141) 0:00:25.035 *****
2026-02-05 02:26:54.588963 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.588975 | orchestrator |
2026-02-05 02:26:54.588987 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-05 02:26:54.588999 | orchestrator | Thursday 05 February 2026 02:26:49 +0000 (0:00:00.140) 0:00:25.175 *****
2026-02-05 02:26:54.589012 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:26:54.589024 | orchestrator |
2026-02-05 02:26:54.589035 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-05 02:26:54.589047 | orchestrator | Thursday 05 February 2026 02:26:49 +0000 (0:00:00.139) 0:00:25.316 *****
2026-02-05 02:26:54.589060 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:26:54.589073 | orchestrator |
2026-02-05 02:26:54.589085 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-05 02:26:54.589098 | orchestrator | Thursday 05 February 2026 02:26:49 +0000 (0:00:00.144) 0:00:25.460 *****
2026-02-05 02:26:54.589200 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '599b5b3c-37df-591b-a248-24d26d466625'}})
2026-02-05 02:26:54.589219 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}})
2026-02-05 02:26:54.589233 | orchestrator |
2026-02-05 02:26:54.589247 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-05 02:26:54.589262 | orchestrator | Thursday 05 February 2026 02:26:50 +0000 (0:00:00.172) 0:00:25.633 ***** 2026-02-05 02:26:54.589276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '599b5b3c-37df-591b-a248-24d26d466625'}})  2026-02-05 02:26:54.589317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}})  2026-02-05 02:26:54.589332 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589345 | orchestrator | 2026-02-05 02:26:54.589358 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-05 02:26:54.589373 | orchestrator | Thursday 05 February 2026 02:26:50 +0000 (0:00:00.386) 0:00:26.019 ***** 2026-02-05 02:26:54.589385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '599b5b3c-37df-591b-a248-24d26d466625'}})  2026-02-05 02:26:54.589399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}})  2026-02-05 02:26:54.589413 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589426 | orchestrator | 2026-02-05 02:26:54.589439 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-05 02:26:54.589452 | orchestrator | Thursday 05 February 2026 02:26:50 +0000 (0:00:00.201) 0:00:26.220 ***** 2026-02-05 02:26:54.589480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '599b5b3c-37df-591b-a248-24d26d466625'}})  2026-02-05 02:26:54.589494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}})  2026-02-05 02:26:54.589520 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589534 | 
orchestrator | 2026-02-05 02:26:54.589548 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-05 02:26:54.589561 | orchestrator | Thursday 05 February 2026 02:26:50 +0000 (0:00:00.158) 0:00:26.379 ***** 2026-02-05 02:26:54.589570 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:26:54.589578 | orchestrator | 2026-02-05 02:26:54.589586 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-05 02:26:54.589597 | orchestrator | Thursday 05 February 2026 02:26:50 +0000 (0:00:00.152) 0:00:26.532 ***** 2026-02-05 02:26:54.589610 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:26:54.589623 | orchestrator | 2026-02-05 02:26:54.589635 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-05 02:26:54.589649 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 (0:00:00.161) 0:00:26.694 ***** 2026-02-05 02:26:54.589686 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589700 | orchestrator | 2026-02-05 02:26:54.589710 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-05 02:26:54.589719 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 (0:00:00.142) 0:00:26.836 ***** 2026-02-05 02:26:54.589727 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589735 | orchestrator | 2026-02-05 02:26:54.589743 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-05 02:26:54.589751 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 (0:00:00.149) 0:00:26.985 ***** 2026-02-05 02:26:54.589767 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589775 | orchestrator | 2026-02-05 02:26:54.589783 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-05 02:26:54.589791 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 
(0:00:00.149) 0:00:27.135 ***** 2026-02-05 02:26:54.589812 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 02:26:54.589821 | orchestrator |  "ceph_osd_devices": { 2026-02-05 02:26:54.589829 | orchestrator |  "sdb": { 2026-02-05 02:26:54.589838 | orchestrator |  "osd_lvm_uuid": "599b5b3c-37df-591b-a248-24d26d466625" 2026-02-05 02:26:54.589846 | orchestrator |  }, 2026-02-05 02:26:54.589854 | orchestrator |  "sdc": { 2026-02-05 02:26:54.589863 | orchestrator |  "osd_lvm_uuid": "f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c" 2026-02-05 02:26:54.589871 | orchestrator |  } 2026-02-05 02:26:54.589879 | orchestrator |  } 2026-02-05 02:26:54.589888 | orchestrator | } 2026-02-05 02:26:54.589896 | orchestrator | 2026-02-05 02:26:54.589904 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-05 02:26:54.589913 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 (0:00:00.131) 0:00:27.266 ***** 2026-02-05 02:26:54.589921 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589929 | orchestrator | 2026-02-05 02:26:54.589937 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-05 02:26:54.589945 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 (0:00:00.147) 0:00:27.414 ***** 2026-02-05 02:26:54.589953 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589961 | orchestrator | 2026-02-05 02:26:54.589969 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-05 02:26:54.589977 | orchestrator | Thursday 05 February 2026 02:26:51 +0000 (0:00:00.143) 0:00:27.558 ***** 2026-02-05 02:26:54.589985 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:26:54.589993 | orchestrator | 2026-02-05 02:26:54.590001 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-05 02:26:54.590144 | orchestrator | Thursday 05 February 2026 02:26:52 +0000 
(0:00:00.143) 0:00:27.701 ***** 2026-02-05 02:26:54.590164 | orchestrator | changed: [testbed-node-4] => { 2026-02-05 02:26:54.590177 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-05 02:26:54.590190 | orchestrator |  "ceph_osd_devices": { 2026-02-05 02:26:54.590204 | orchestrator |  "sdb": { 2026-02-05 02:26:54.590220 | orchestrator |  "osd_lvm_uuid": "599b5b3c-37df-591b-a248-24d26d466625" 2026-02-05 02:26:54.590233 | orchestrator |  }, 2026-02-05 02:26:54.590247 | orchestrator |  "sdc": { 2026-02-05 02:26:54.590260 | orchestrator |  "osd_lvm_uuid": "f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c" 2026-02-05 02:26:54.590274 | orchestrator |  } 2026-02-05 02:26:54.590287 | orchestrator |  }, 2026-02-05 02:26:54.590302 | orchestrator |  "lvm_volumes": [ 2026-02-05 02:26:54.590316 | orchestrator |  { 2026-02-05 02:26:54.590330 | orchestrator |  "data": "osd-block-599b5b3c-37df-591b-a248-24d26d466625", 2026-02-05 02:26:54.590339 | orchestrator |  "data_vg": "ceph-599b5b3c-37df-591b-a248-24d26d466625" 2026-02-05 02:26:54.590347 | orchestrator |  }, 2026-02-05 02:26:54.590355 | orchestrator |  { 2026-02-05 02:26:54.590363 | orchestrator |  "data": "osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c", 2026-02-05 02:26:54.590371 | orchestrator |  "data_vg": "ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c" 2026-02-05 02:26:54.590384 | orchestrator |  } 2026-02-05 02:26:54.590397 | orchestrator |  ] 2026-02-05 02:26:54.590410 | orchestrator |  } 2026-02-05 02:26:54.590424 | orchestrator | } 2026-02-05 02:26:54.590438 | orchestrator | 2026-02-05 02:26:54.590450 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-05 02:26:54.590464 | orchestrator | Thursday 05 February 2026 02:26:52 +0000 (0:00:00.431) 0:00:28.133 ***** 2026-02-05 02:26:54.590473 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-05 02:26:54.590481 | orchestrator | 2026-02-05 02:26:54.590489 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-05 02:26:54.590497 | orchestrator | 2026-02-05 02:26:54.590504 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 02:26:54.590512 | orchestrator | Thursday 05 February 2026 02:26:53 +0000 (0:00:01.151) 0:00:29.284 ***** 2026-02-05 02:26:54.590530 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 02:26:54.590538 | orchestrator | 2026-02-05 02:26:54.590546 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 02:26:54.590554 | orchestrator | Thursday 05 February 2026 02:26:53 +0000 (0:00:00.271) 0:00:29.556 ***** 2026-02-05 02:26:54.590562 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:26:54.590570 | orchestrator | 2026-02-05 02:26:54.590578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:26:54.590586 | orchestrator | Thursday 05 February 2026 02:26:54 +0000 (0:00:00.252) 0:00:29.809 ***** 2026-02-05 02:26:54.590594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-05 02:26:54.590602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-05 02:26:54.590610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-05 02:26:54.590618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-05 02:26:54.590626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-05 02:26:54.590645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-05 02:27:03.276890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-05 02:27:03.276986 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-05 02:27:03.276998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-05 02:27:03.277006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-05 02:27:03.277029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-05 02:27:03.277037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-05 02:27:03.277045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-05 02:27:03.277052 | orchestrator | 2026-02-05 02:27:03.277061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277069 | orchestrator | Thursday 05 February 2026 02:26:54 +0000 (0:00:00.371) 0:00:30.181 ***** 2026-02-05 02:27:03.277077 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277086 | orchestrator | 2026-02-05 02:27:03.277094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277101 | orchestrator | Thursday 05 February 2026 02:26:54 +0000 (0:00:00.209) 0:00:30.390 ***** 2026-02-05 02:27:03.277108 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277149 | orchestrator | 2026-02-05 02:27:03.277157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277164 | orchestrator | Thursday 05 February 2026 02:26:54 +0000 (0:00:00.195) 0:00:30.586 ***** 2026-02-05 02:27:03.277172 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277179 | orchestrator | 2026-02-05 02:27:03.277186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277194 | 
orchestrator | Thursday 05 February 2026 02:26:55 +0000 (0:00:00.243) 0:00:30.829 ***** 2026-02-05 02:27:03.277201 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277208 | orchestrator | 2026-02-05 02:27:03.277215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277223 | orchestrator | Thursday 05 February 2026 02:26:55 +0000 (0:00:00.603) 0:00:31.433 ***** 2026-02-05 02:27:03.277230 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277237 | orchestrator | 2026-02-05 02:27:03.277245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277252 | orchestrator | Thursday 05 February 2026 02:26:56 +0000 (0:00:00.221) 0:00:31.655 ***** 2026-02-05 02:27:03.277277 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277343 | orchestrator | 2026-02-05 02:27:03.277352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277359 | orchestrator | Thursday 05 February 2026 02:26:56 +0000 (0:00:00.209) 0:00:31.865 ***** 2026-02-05 02:27:03.277366 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277373 | orchestrator | 2026-02-05 02:27:03.277381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277388 | orchestrator | Thursday 05 February 2026 02:26:56 +0000 (0:00:00.246) 0:00:32.112 ***** 2026-02-05 02:27:03.277395 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:27:03.277402 | orchestrator | 2026-02-05 02:27:03.277409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277416 | orchestrator | Thursday 05 February 2026 02:26:56 +0000 (0:00:00.214) 0:00:32.327 ***** 2026-02-05 02:27:03.277424 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa) 2026-02-05 02:27:03.277433 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa) 2026-02-05 02:27:03.277440 | orchestrator | 2026-02-05 02:27:03.277449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277457 | orchestrator | Thursday 05 February 2026 02:26:57 +0000 (0:00:00.469) 0:00:32.796 ***** 2026-02-05 02:27:03.277466 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9) 2026-02-05 02:27:03.277476 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9) 2026-02-05 02:27:03.277484 | orchestrator | 2026-02-05 02:27:03.277493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277502 | orchestrator | Thursday 05 February 2026 02:26:57 +0000 (0:00:00.421) 0:00:33.218 ***** 2026-02-05 02:27:03.277511 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49) 2026-02-05 02:27:03.277519 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49) 2026-02-05 02:27:03.277528 | orchestrator | 2026-02-05 02:27:03.277537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:27:03.277546 | orchestrator | Thursday 05 February 2026 02:26:58 +0000 (0:00:00.447) 0:00:33.665 ***** 2026-02-05 02:27:03.277555 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc) 2026-02-05 02:27:03.277564 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc) 2026-02-05 02:27:03.277573 | orchestrator | 2026-02-05 02:27:03.277582 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-05 02:27:03.277590 | orchestrator | Thursday 05 February 2026 02:26:58 +0000 (0:00:00.461) 0:00:34.127 ***** 2026-02-05 02:27:03.277599 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 02:27:03.277608 | orchestrator | 2026-02-05 02:27:03.277616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:27:03.277639 | orchestrator | Thursday 05 February 2026 02:26:58 +0000 (0:00:00.336) 0:00:34.464 ***** 2026-02-05 02:27:03.277648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-05 02:27:03.277656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-05 02:27:03.277665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-05 02:27:03.277679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-05 02:27:03.277688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-05 02:27:03.277696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-05 02:27:03.277712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-05 02:27:03.277721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-05 02:27:03.277729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-05 02:27:03.277737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-05 02:27:03.277746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
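The "Generate lvm_volumes structure (block only)" tasks in this log derive each entry deterministically from a device's `osd_lvm_uuid`: the LV name is `osd-block-<uuid>` and the VG name is `ceph-<uuid>`, as the printed `_ceph_configure_lvm_config_data` above shows for testbed-node-4. A minimal Python sketch of that mapping, assuming only the dict shape seen in the log (the function name is illustrative, not part of the playbook):

```python
# Sketch: map ceph_osd_devices (as printed in the log) onto the
# block-only lvm_volumes list that ceph-ansible consumes.
# Input/output shapes mirror the "Print configuration data" task output.

def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """One lvm_volumes entry per OSD device, derived from its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{dev['osd_lvm_uuid']}",   # logical volume name
            "data_vg": f"ceph-{dev['osd_lvm_uuid']}",     # volume group name
        }
        for dev in ceph_osd_devices.values()
    ]

# UUIDs as logged for testbed-node-4 above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "599b5b3c-37df-591b-a248-24d26d466625"},
    "sdc": {"osd_lvm_uuid": "f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

The block+db / block+wal variants are skipped in this run because no separate DB or WAL devices are configured, so only the block-only branch contributes entries.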
2026-02-05 02:27:03.277755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-05 02:27:03.277763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-05 02:27:03.277771 | orchestrator |
2026-02-05 02:27:03.277781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277790 | orchestrator | Thursday 05 February 2026 02:26:59 +0000 (0:00:00.581) 0:00:35.045 *****
2026-02-05 02:27:03.277799 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277806 | orchestrator |
2026-02-05 02:27:03.277813 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277820 | orchestrator | Thursday 05 February 2026 02:26:59 +0000 (0:00:00.220) 0:00:35.266 *****
2026-02-05 02:27:03.277828 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277835 | orchestrator |
2026-02-05 02:27:03.277842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277849 | orchestrator | Thursday 05 February 2026 02:26:59 +0000 (0:00:00.198) 0:00:35.465 *****
2026-02-05 02:27:03.277856 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277863 | orchestrator |
2026-02-05 02:27:03.277871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277878 | orchestrator | Thursday 05 February 2026 02:27:00 +0000 (0:00:00.198) 0:00:35.664 *****
2026-02-05 02:27:03.277885 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277892 | orchestrator |
2026-02-05 02:27:03.277899 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277906 | orchestrator | Thursday 05 February 2026 02:27:00 +0000 (0:00:00.224) 0:00:35.888 *****
2026-02-05 02:27:03.277913 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277921 | orchestrator |
2026-02-05 02:27:03.277928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277935 | orchestrator | Thursday 05 February 2026 02:27:00 +0000 (0:00:00.215) 0:00:36.104 *****
2026-02-05 02:27:03.277942 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277949 | orchestrator |
2026-02-05 02:27:03.277956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277963 | orchestrator | Thursday 05 February 2026 02:27:00 +0000 (0:00:00.216) 0:00:36.320 *****
2026-02-05 02:27:03.277971 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.277978 | orchestrator |
2026-02-05 02:27:03.277985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.277992 | orchestrator | Thursday 05 February 2026 02:27:00 +0000 (0:00:00.205) 0:00:36.525 *****
2026-02-05 02:27:03.277999 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.278006 | orchestrator |
2026-02-05 02:27:03.278063 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.278072 | orchestrator | Thursday 05 February 2026 02:27:01 +0000 (0:00:00.200) 0:00:36.726 *****
2026-02-05 02:27:03.278084 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-05 02:27:03.278096 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-05 02:27:03.278108 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-05 02:27:03.278142 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-05 02:27:03.278160 | orchestrator |
2026-02-05 02:27:03.278181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.278193 | orchestrator | Thursday 05 February 2026 02:27:01 +0000 (0:00:00.868) 0:00:37.595 *****
2026-02-05 02:27:03.278203 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.278214 | orchestrator |
2026-02-05 02:27:03.278227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.278238 | orchestrator | Thursday 05 February 2026 02:27:02 +0000 (0:00:00.205) 0:00:37.800 *****
2026-02-05 02:27:03.278250 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.278262 | orchestrator |
2026-02-05 02:27:03.278273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.278285 | orchestrator | Thursday 05 February 2026 02:27:02 +0000 (0:00:00.198) 0:00:37.999 *****
2026-02-05 02:27:03.278337 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.278354 | orchestrator |
2026-02-05 02:27:03.278367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 02:27:03.278380 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.669) 0:00:38.668 *****
2026-02-05 02:27:03.278389 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:03.278396 | orchestrator |
2026-02-05 02:27:03.278413 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-05 02:27:07.380956 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.204) 0:00:38.872 *****
2026-02-05 02:27:07.381032 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-02-05 02:27:07.381038 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-02-05 02:27:07.381044 | orchestrator |
2026-02-05 02:27:07.381049 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-05 02:27:07.381067 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.169) 0:00:39.042 *****
2026-02-05 02:27:07.381073 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381077 | orchestrator |
2026-02-05 02:27:07.381082 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-05 02:27:07.381087 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.146) 0:00:39.188 *****
2026-02-05 02:27:07.381091 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381096 | orchestrator |
2026-02-05 02:27:07.381100 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-05 02:27:07.381104 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.141) 0:00:39.330 *****
2026-02-05 02:27:07.381109 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381113 | orchestrator |
2026-02-05 02:27:07.381158 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-05 02:27:07.381163 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.128) 0:00:39.459 *****
2026-02-05 02:27:07.381167 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:27:07.381172 | orchestrator |
2026-02-05 02:27:07.381177 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-05 02:27:07.381181 | orchestrator | Thursday 05 February 2026 02:27:03 +0000 (0:00:00.129) 0:00:39.588 *****
2026-02-05 02:27:07.381186 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27670a2c-7838-5627-a951-e8a6d97fe4be'}})
2026-02-05 02:27:07.381191 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '51c61bf5-abad-542f-be8e-c69d5e860565'}})
2026-02-05 02:27:07.381195 | orchestrator |
2026-02-05 02:27:07.381200 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-05 02:27:07.381204 | orchestrator | Thursday 05 February 2026 02:27:04 +0000 (0:00:00.173) 0:00:39.762 *****
2026-02-05 02:27:07.381209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27670a2c-7838-5627-a951-e8a6d97fe4be'}})
2026-02-05 02:27:07.381215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '51c61bf5-abad-542f-be8e-c69d5e860565'}})
2026-02-05 02:27:07.381219 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381239 | orchestrator |
2026-02-05 02:27:07.381243 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-05 02:27:07.381248 | orchestrator | Thursday 05 February 2026 02:27:04 +0000 (0:00:00.151) 0:00:39.914 *****
2026-02-05 02:27:07.381252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27670a2c-7838-5627-a951-e8a6d97fe4be'}})
2026-02-05 02:27:07.381256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '51c61bf5-abad-542f-be8e-c69d5e860565'}})
2026-02-05 02:27:07.381261 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381265 | orchestrator |
2026-02-05 02:27:07.381269 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-05 02:27:07.381274 | orchestrator | Thursday 05 February 2026 02:27:04 +0000 (0:00:00.155) 0:00:40.069 *****
2026-02-05 02:27:07.381278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27670a2c-7838-5627-a951-e8a6d97fe4be'}})
2026-02-05 02:27:07.381283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '51c61bf5-abad-542f-be8e-c69d5e860565'}})
2026-02-05 02:27:07.381287 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381291 | orchestrator |
2026-02-05 02:27:07.381296 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-05 02:27:07.381300 | orchestrator | Thursday 05 February 2026 02:27:04 +0000 (0:00:00.157) 0:00:40.226 *****
2026-02-05 02:27:07.381304 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:27:07.381309 | orchestrator |
2026-02-05 02:27:07.381313 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-05 02:27:07.381317 | orchestrator | Thursday 05 February 2026 02:27:04 +0000 (0:00:00.134) 0:00:40.361 *****
2026-02-05 02:27:07.381322 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:27:07.381326 | orchestrator |
2026-02-05 02:27:07.381331 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-05 02:27:07.381335 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.344) 0:00:40.705 *****
2026-02-05 02:27:07.381340 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381344 | orchestrator |
2026-02-05 02:27:07.381349 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-05 02:27:07.381353 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.147) 0:00:40.853 *****
2026-02-05 02:27:07.381358 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381363 | orchestrator |
2026-02-05 02:27:07.381367 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-05 02:27:07.381372 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.144) 0:00:40.997 *****
2026-02-05 02:27:07.381376 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381381 | orchestrator |
2026-02-05 02:27:07.381385 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-05 02:27:07.381390 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.144) 0:00:41.141 *****
2026-02-05 02:27:07.381394 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 02:27:07.381399 | orchestrator |     "ceph_osd_devices": {
2026-02-05 02:27:07.381404 | orchestrator |         "sdb": {
2026-02-05 02:27:07.381420 | orchestrator |             "osd_lvm_uuid": "27670a2c-7838-5627-a951-e8a6d97fe4be"
2026-02-05 02:27:07.381425 | orchestrator |         },
2026-02-05 02:27:07.381430 | orchestrator |         "sdc": {
2026-02-05 02:27:07.381434 | orchestrator |             "osd_lvm_uuid": "51c61bf5-abad-542f-be8e-c69d5e860565"
2026-02-05 02:27:07.381439 | orchestrator |         }
2026-02-05 02:27:07.381444 | orchestrator |     }
2026-02-05 02:27:07.381449 | orchestrator | }
2026-02-05 02:27:07.381453 | orchestrator |
2026-02-05 02:27:07.381458 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-05 02:27:07.381466 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.143) 0:00:41.285 *****
2026-02-05 02:27:07.381470 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381479 | orchestrator |
2026-02-05 02:27:07.381483 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-05 02:27:07.381488 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.132) 0:00:41.418 *****
2026-02-05 02:27:07.381492 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381497 | orchestrator |
2026-02-05 02:27:07.381501 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-05 02:27:07.381506 | orchestrator | Thursday 05 February 2026 02:27:05 +0000 (0:00:00.147) 0:00:41.565 *****
2026-02-05 02:27:07.381510 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:27:07.381515 | orchestrator |
2026-02-05 02:27:07.381519 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-05 02:27:07.381524 | orchestrator | Thursday 05 February 2026 02:27:06 +0000 (0:00:00.140) 0:00:41.705 *****
2026-02-05 02:27:07.381528 | orchestrator | changed: [testbed-node-5] => {
2026-02-05 02:27:07.381533 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-05 02:27:07.381538 | orchestrator |
 "ceph_osd_devices": { 2026-02-05 02:27:07.381542 | orchestrator |  "sdb": { 2026-02-05 02:27:07.381547 | orchestrator |  "osd_lvm_uuid": "27670a2c-7838-5627-a951-e8a6d97fe4be" 2026-02-05 02:27:07.381551 | orchestrator |  }, 2026-02-05 02:27:07.381556 | orchestrator |  "sdc": { 2026-02-05 02:27:07.381560 | orchestrator |  "osd_lvm_uuid": "51c61bf5-abad-542f-be8e-c69d5e860565" 2026-02-05 02:27:07.381565 | orchestrator |  } 2026-02-05 02:27:07.381570 | orchestrator |  }, 2026-02-05 02:27:07.381574 | orchestrator |  "lvm_volumes": [ 2026-02-05 02:27:07.381579 | orchestrator |  { 2026-02-05 02:27:07.381583 | orchestrator |  "data": "osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be", 2026-02-05 02:27:07.381588 | orchestrator |  "data_vg": "ceph-27670a2c-7838-5627-a951-e8a6d97fe4be" 2026-02-05 02:27:07.381592 | orchestrator |  }, 2026-02-05 02:27:07.381597 | orchestrator |  { 2026-02-05 02:27:07.381601 | orchestrator |  "data": "osd-block-51c61bf5-abad-542f-be8e-c69d5e860565", 2026-02-05 02:27:07.381606 | orchestrator |  "data_vg": "ceph-51c61bf5-abad-542f-be8e-c69d5e860565" 2026-02-05 02:27:07.381610 | orchestrator |  } 2026-02-05 02:27:07.381615 | orchestrator |  ] 2026-02-05 02:27:07.381620 | orchestrator |  } 2026-02-05 02:27:07.381624 | orchestrator | } 2026-02-05 02:27:07.381629 | orchestrator | 2026-02-05 02:27:07.381633 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-05 02:27:07.381638 | orchestrator | Thursday 05 February 2026 02:27:06 +0000 (0:00:00.228) 0:00:41.933 ***** 2026-02-05 02:27:07.381642 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 02:27:07.381647 | orchestrator | 2026-02-05 02:27:07.381651 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:27:07.381656 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:27:07.381662 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:27:07.381666 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:27:07.381671 | orchestrator | 2026-02-05 02:27:07.381675 | orchestrator | 2026-02-05 02:27:07.381680 | orchestrator | 2026-02-05 02:27:07.381684 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:27:07.381689 | orchestrator | Thursday 05 February 2026 02:27:07 +0000 (0:00:01.027) 0:00:42.961 ***** 2026-02-05 02:27:07.381693 | orchestrator | =============================================================================== 2026-02-05 02:27:07.381698 | orchestrator | Write configuration file ------------------------------------------------ 3.97s 2026-02-05 02:27:07.381702 | orchestrator | Add known links to the list of available block devices ------------------ 1.41s 2026-02-05 02:27:07.381710 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s 2026-02-05 02:27:07.381715 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2026-02-05 02:27:07.381719 | orchestrator | Print configuration data ------------------------------------------------ 0.88s 2026-02-05 02:27:07.381724 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-02-05 02:27:07.381728 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2026-02-05 02:27:07.381733 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-02-05 02:27:07.381738 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s 2026-02-05 02:27:07.381742 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-02-05 
02:27:07.381747 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.70s 2026-02-05 02:27:07.381751 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-05 02:27:07.381756 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-05 02:27:07.381763 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-02-05 02:27:07.746212 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-02-05 02:27:07.746317 | orchestrator | Set OSD devices config data --------------------------------------------- 0.66s 2026-02-05 02:27:07.746334 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-02-05 02:27:07.746365 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-02-05 02:27:07.746377 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-02-05 02:27:07.746388 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2026-02-05 02:27:30.342334 | orchestrator | 2026-02-05 02:27:30 | INFO  | Task 340ae1e1-88d6-4fdd-8d4e-8ab0d46770e9 (sync inventory) is running in background. Output coming soon. 
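The configuration data printed by the play above shows the mapping the "Compile lvm_volumes" task produces: each entry in `ceph_osd_devices` (keyed by device, carrying an `osd_lvm_uuid`) becomes one `lvm_volumes` entry with `data: osd-block-<uuid>` and `data_vg: ceph-<uuid>`. A minimal sketch of that transformation, using the UUIDs from testbed-node-5's output (this is an illustration inferred from the logged data, not the actual OSISM role code):

```python
# Input as printed under "Print ceph_osd_devices" for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "27670a2c-7838-5627-a951-e8a6d97fe4be"},
    "sdc": {"osd_lvm_uuid": "51c61bf5-abad-542f-be8e-c69d5e860565"},
}

def compile_lvm_volumes(devices: dict) -> list:
    """Block-only case (no separate DB/WAL devices, as in this run):
    each OSD gets an LV 'osd-block-<uuid>' in a dedicated VG 'ceph-<uuid>'."""
    volumes = []
    for device in sorted(devices):
        uuid = devices[device]["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",
            "data_vg": f"ceph-{uuid}",
        })
    return volumes

lvm_volumes = compile_lvm_volumes(ceph_osd_devices)
# Matches the "_ceph_configure_lvm_config_data" structure printed above.
```

The DB, WAL, and DB+WAL variants of the task are all skipped in this run, which is why only the block-only structure appears in the written configuration file.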
2026-02-05 02:27:57.226690 | orchestrator | 2026-02-05 02:27:31 | INFO  | Starting group_vars file reorganization 2026-02-05 02:27:57.226804 | orchestrator | 2026-02-05 02:27:31 | INFO  | Moved 0 file(s) to their respective directories 2026-02-05 02:27:57.226825 | orchestrator | 2026-02-05 02:27:31 | INFO  | Group_vars file reorganization completed 2026-02-05 02:27:57.226839 | orchestrator | 2026-02-05 02:27:34 | INFO  | Starting variable preparation from inventory 2026-02-05 02:27:57.226854 | orchestrator | 2026-02-05 02:27:37 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-05 02:27:57.226869 | orchestrator | 2026-02-05 02:27:37 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-05 02:27:57.226882 | orchestrator | 2026-02-05 02:27:37 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-05 02:27:57.226896 | orchestrator | 2026-02-05 02:27:37 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-05 02:27:57.226910 | orchestrator | 2026-02-05 02:27:37 | INFO  | Variable preparation completed 2026-02-05 02:27:57.226924 | orchestrator | 2026-02-05 02:27:39 | INFO  | Starting inventory overwrite handling 2026-02-05 02:27:57.226938 | orchestrator | 2026-02-05 02:27:39 | INFO  | Handling group overwrites in 99-overwrite 2026-02-05 02:27:57.226952 | orchestrator | 2026-02-05 02:27:39 | INFO  | Removing group frr:children from 60-generic 2026-02-05 02:27:57.227035 | orchestrator | 2026-02-05 02:27:39 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-05 02:27:57.227047 | orchestrator | 2026-02-05 02:27:39 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-05 02:27:57.227082 | orchestrator | 2026-02-05 02:27:39 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-05 02:27:57.227091 | orchestrator | 2026-02-05 02:27:39 | INFO  | Handling group overwrites in 20-roles 2026-02-05 02:27:57.227099 | orchestrator | 2026-02-05 02:27:39 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-05 02:27:57.227107 | orchestrator | 2026-02-05 02:27:39 | INFO  | Removed 5 group(s) in total 2026-02-05 02:27:57.227180 | orchestrator | 2026-02-05 02:27:39 | INFO  | Inventory overwrite handling completed 2026-02-05 02:27:57.227189 | orchestrator | 2026-02-05 02:27:40 | INFO  | Starting merge of inventory files 2026-02-05 02:27:57.227197 | orchestrator | 2026-02-05 02:27:40 | INFO  | Inventory files merged successfully 2026-02-05 02:27:57.227205 | orchestrator | 2026-02-05 02:27:45 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-05 02:27:57.227213 | orchestrator | 2026-02-05 02:27:55 | INFO  | Successfully wrote ClusterShell configuration 2026-02-05 02:27:57.227259 | orchestrator | [master 6663d7f] 2026-02-05-02-27 2026-02-05 02:27:57.227274 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-05 02:27:59.519517 | orchestrator | 2026-02-05 02:27:59 | INFO  | Task 88bb0ad0-d635-4425-9a63-e06f8879c179 (ceph-create-lvm-devices) was prepared for execution. 2026-02-05 02:27:59.519622 | orchestrator | 2026-02-05 02:27:59 | INFO  | It takes a moment until task 88bb0ad0-d635-4425-9a63-e06f8879c179 (ceph-create-lvm-devices) has been started and output is visible here. 
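The `ceph-create-lvm-devices` play that follows creates, per `lvm_volumes` entry, a volume group on the backing device and a logical volume inside it (the "Create block VGs" and "Create block LVs" tasks). A rough sketch of the equivalent LVM CLI operations; the actual role presumably uses Ansible LVM modules, and the `-l 100%FREE` extent allocation is an assumption for illustration:

```python
def lvm_create_commands(vg_to_pv: dict, lvm_volumes: list) -> list:
    """Build vgcreate/lvcreate command lines for each block VG/LV pair.

    vg_to_pv maps each data_vg to its physical device, mirroring the
    'Create dict of block VGs -> PVs from ceph_osd_devices' task.
    """
    cmds = []
    for vol in lvm_volumes:
        pv = vg_to_pv[vol["data_vg"]]
        cmds.append(f"vgcreate {vol['data_vg']} {pv}")
        # Extent size is assumed here; the role may size LVs differently.
        cmds.append(f"lvcreate -y -n {vol['data']} -l 100%FREE {vol['data_vg']}")
    return cmds

# Example using testbed-node-3's first OSD from the log below.
vg_to_pv = {"ceph-de37fca4-ea41-596c-ab1a-50038d0e278e": "/dev/sdb"}
volumes = [{
    "data": "osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e",
    "data_vg": "ceph-de37fca4-ea41-596c-ab1a-50038d0e278e",
}]
for cmd in lvm_create_commands(vg_to_pv, volumes):
    print(cmd)
```

The per-item `changed` results in the play confirm both the VG and LV steps ran for sdb and sdc on each node.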
2026-02-05 02:28:12.150378 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 02:28:12.150489 | orchestrator | 2.16.14 2026-02-05 02:28:12.150508 | orchestrator | 2026-02-05 02:28:12.150521 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-05 02:28:12.150534 | orchestrator | 2026-02-05 02:28:12.150545 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 02:28:12.150557 | orchestrator | Thursday 05 February 2026 02:28:03 +0000 (0:00:00.303) 0:00:00.303 ***** 2026-02-05 02:28:12.150568 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 02:28:12.150580 | orchestrator | 2026-02-05 02:28:12.150591 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 02:28:12.150602 | orchestrator | Thursday 05 February 2026 02:28:04 +0000 (0:00:00.263) 0:00:00.567 ***** 2026-02-05 02:28:12.150613 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:12.150625 | orchestrator | 2026-02-05 02:28:12.150636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.150647 | orchestrator | Thursday 05 February 2026 02:28:04 +0000 (0:00:00.239) 0:00:00.807 ***** 2026-02-05 02:28:12.150658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-05 02:28:12.150669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-05 02:28:12.150697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-05 02:28:12.150708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-05 02:28:12.150720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-05 
02:28:12.150730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-05 02:28:12.150741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-05 02:28:12.150752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-05 02:28:12.150763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-05 02:28:12.150774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-05 02:28:12.150811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-05 02:28:12.150822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-05 02:28:12.150833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-05 02:28:12.150844 | orchestrator | 2026-02-05 02:28:12.150855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.150866 | orchestrator | Thursday 05 February 2026 02:28:04 +0000 (0:00:00.530) 0:00:01.337 ***** 2026-02-05 02:28:12.150877 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.150888 | orchestrator | 2026-02-05 02:28:12.150899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.150912 | orchestrator | Thursday 05 February 2026 02:28:05 +0000 (0:00:00.230) 0:00:01.568 ***** 2026-02-05 02:28:12.150925 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.150939 | orchestrator | 2026-02-05 02:28:12.150951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.150964 | orchestrator | Thursday 05 February 2026 02:28:05 +0000 (0:00:00.230) 0:00:01.798 ***** 2026-02-05 
02:28:12.150976 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.150988 | orchestrator | 2026-02-05 02:28:12.151001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151013 | orchestrator | Thursday 05 February 2026 02:28:05 +0000 (0:00:00.282) 0:00:02.081 ***** 2026-02-05 02:28:12.151026 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151039 | orchestrator | 2026-02-05 02:28:12.151051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151064 | orchestrator | Thursday 05 February 2026 02:28:05 +0000 (0:00:00.207) 0:00:02.288 ***** 2026-02-05 02:28:12.151076 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151090 | orchestrator | 2026-02-05 02:28:12.151102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151115 | orchestrator | Thursday 05 February 2026 02:28:06 +0000 (0:00:00.213) 0:00:02.501 ***** 2026-02-05 02:28:12.151128 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151166 | orchestrator | 2026-02-05 02:28:12.151180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151195 | orchestrator | Thursday 05 February 2026 02:28:06 +0000 (0:00:00.218) 0:00:02.719 ***** 2026-02-05 02:28:12.151206 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151217 | orchestrator | 2026-02-05 02:28:12.151228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151239 | orchestrator | Thursday 05 February 2026 02:28:06 +0000 (0:00:00.217) 0:00:02.937 ***** 2026-02-05 02:28:12.151250 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151261 | orchestrator | 2026-02-05 02:28:12.151272 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-05 02:28:12.151283 | orchestrator | Thursday 05 February 2026 02:28:06 +0000 (0:00:00.200) 0:00:03.137 ***** 2026-02-05 02:28:12.151294 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97) 2026-02-05 02:28:12.151307 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97) 2026-02-05 02:28:12.151317 | orchestrator | 2026-02-05 02:28:12.151329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151356 | orchestrator | Thursday 05 February 2026 02:28:07 +0000 (0:00:00.650) 0:00:03.787 ***** 2026-02-05 02:28:12.151368 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2) 2026-02-05 02:28:12.151379 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2) 2026-02-05 02:28:12.151390 | orchestrator | 2026-02-05 02:28:12.151401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151435 | orchestrator | Thursday 05 February 2026 02:28:08 +0000 (0:00:00.673) 0:00:04.461 ***** 2026-02-05 02:28:12.151457 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b) 2026-02-05 02:28:12.151469 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b) 2026-02-05 02:28:12.151480 | orchestrator | 2026-02-05 02:28:12.151491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151502 | orchestrator | Thursday 05 February 2026 02:28:08 +0000 (0:00:00.927) 0:00:05.388 ***** 2026-02-05 02:28:12.151513 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff) 2026-02-05 02:28:12.151524 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff) 2026-02-05 02:28:12.151535 | orchestrator | 2026-02-05 02:28:12.151552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:12.151564 | orchestrator | Thursday 05 February 2026 02:28:09 +0000 (0:00:00.447) 0:00:05.836 ***** 2026-02-05 02:28:12.151575 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 02:28:12.151585 | orchestrator | 2026-02-05 02:28:12.151596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.151607 | orchestrator | Thursday 05 February 2026 02:28:09 +0000 (0:00:00.373) 0:00:06.210 ***** 2026-02-05 02:28:12.151618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-05 02:28:12.151629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-05 02:28:12.151640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-05 02:28:12.151650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-05 02:28:12.151661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-05 02:28:12.151672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-05 02:28:12.151683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-05 02:28:12.151693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-05 02:28:12.151704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-05 02:28:12.151715 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-05 02:28:12.151726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-05 02:28:12.151737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-05 02:28:12.151747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-05 02:28:12.151758 | orchestrator | 2026-02-05 02:28:12.151769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.151779 | orchestrator | Thursday 05 February 2026 02:28:10 +0000 (0:00:00.439) 0:00:06.650 ***** 2026-02-05 02:28:12.151790 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151801 | orchestrator | 2026-02-05 02:28:12.151812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.151823 | orchestrator | Thursday 05 February 2026 02:28:10 +0000 (0:00:00.222) 0:00:06.872 ***** 2026-02-05 02:28:12.151834 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151845 | orchestrator | 2026-02-05 02:28:12.151855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.151866 | orchestrator | Thursday 05 February 2026 02:28:10 +0000 (0:00:00.234) 0:00:07.106 ***** 2026-02-05 02:28:12.151877 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151895 | orchestrator | 2026-02-05 02:28:12.151906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.151917 | orchestrator | Thursday 05 February 2026 02:28:10 +0000 (0:00:00.213) 0:00:07.319 ***** 2026-02-05 02:28:12.151928 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151938 | orchestrator | 2026-02-05 02:28:12.151949 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-05 02:28:12.151960 | orchestrator | Thursday 05 February 2026 02:28:11 +0000 (0:00:00.216) 0:00:07.535 ***** 2026-02-05 02:28:12.151971 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.151982 | orchestrator | 2026-02-05 02:28:12.151993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.152004 | orchestrator | Thursday 05 February 2026 02:28:11 +0000 (0:00:00.223) 0:00:07.759 ***** 2026-02-05 02:28:12.152014 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.152025 | orchestrator | 2026-02-05 02:28:12.152036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:12.152047 | orchestrator | Thursday 05 February 2026 02:28:11 +0000 (0:00:00.594) 0:00:08.354 ***** 2026-02-05 02:28:12.152058 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:12.152068 | orchestrator | 2026-02-05 02:28:12.152085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:20.323503 | orchestrator | Thursday 05 February 2026 02:28:12 +0000 (0:00:00.213) 0:00:08.568 ***** 2026-02-05 02:28:20.323594 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323605 | orchestrator | 2026-02-05 02:28:20.323613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:20.323620 | orchestrator | Thursday 05 February 2026 02:28:12 +0000 (0:00:00.224) 0:00:08.792 ***** 2026-02-05 02:28:20.323628 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-05 02:28:20.323635 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-05 02:28:20.323642 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-05 02:28:20.323649 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-05 02:28:20.323655 | orchestrator | 2026-02-05 
02:28:20.323662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:20.323669 | orchestrator | Thursday 05 February 2026 02:28:13 +0000 (0:00:00.683) 0:00:09.476 ***** 2026-02-05 02:28:20.323675 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323682 | orchestrator | 2026-02-05 02:28:20.323689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:20.323695 | orchestrator | Thursday 05 February 2026 02:28:13 +0000 (0:00:00.213) 0:00:09.689 ***** 2026-02-05 02:28:20.323702 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323708 | orchestrator | 2026-02-05 02:28:20.323728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:20.323734 | orchestrator | Thursday 05 February 2026 02:28:13 +0000 (0:00:00.228) 0:00:09.918 ***** 2026-02-05 02:28:20.323741 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323748 | orchestrator | 2026-02-05 02:28:20.323754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:20.323761 | orchestrator | Thursday 05 February 2026 02:28:13 +0000 (0:00:00.222) 0:00:10.141 ***** 2026-02-05 02:28:20.323768 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323774 | orchestrator | 2026-02-05 02:28:20.323781 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-05 02:28:20.323788 | orchestrator | Thursday 05 February 2026 02:28:13 +0000 (0:00:00.202) 0:00:10.343 ***** 2026-02-05 02:28:20.323795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323801 | orchestrator | 2026-02-05 02:28:20.323808 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-05 02:28:20.323814 | orchestrator | Thursday 05 February 2026 02:28:14 +0000 (0:00:00.147) 
0:00:10.490 ***** 2026-02-05 02:28:20.323822 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de37fca4-ea41-596c-ab1a-50038d0e278e'}}) 2026-02-05 02:28:20.323846 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '825a1c54-3e62-51fa-b7a4-9af3e8833567'}}) 2026-02-05 02:28:20.323854 | orchestrator | 2026-02-05 02:28:20.323865 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-05 02:28:20.323877 | orchestrator | Thursday 05 February 2026 02:28:14 +0000 (0:00:00.210) 0:00:10.701 ***** 2026-02-05 02:28:20.323889 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}) 2026-02-05 02:28:20.323901 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}) 2026-02-05 02:28:20.323911 | orchestrator | 2026-02-05 02:28:20.323922 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-05 02:28:20.323932 | orchestrator | Thursday 05 February 2026 02:28:16 +0000 (0:00:02.084) 0:00:12.786 ***** 2026-02-05 02:28:20.323943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.323956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.323967 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.323977 | orchestrator | 2026-02-05 02:28:20.323988 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-05 02:28:20.323999 | orchestrator | Thursday 05 February 2026 
02:28:16 +0000 (0:00:00.325) 0:00:13.111 ***** 2026-02-05 02:28:20.324010 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}) 2026-02-05 02:28:20.324021 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}) 2026-02-05 02:28:20.324028 | orchestrator | 2026-02-05 02:28:20.324035 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-05 02:28:20.324041 | orchestrator | Thursday 05 February 2026 02:28:18 +0000 (0:00:01.559) 0:00:14.671 ***** 2026-02-05 02:28:20.324048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324063 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324070 | orchestrator | 2026-02-05 02:28:20.324078 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-05 02:28:20.324086 | orchestrator | Thursday 05 February 2026 02:28:18 +0000 (0:00:00.158) 0:00:14.830 ***** 2026-02-05 02:28:20.324108 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324116 | orchestrator | 2026-02-05 02:28:20.324124 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-05 02:28:20.324132 | orchestrator | Thursday 05 February 2026 02:28:18 +0000 (0:00:00.159) 0:00:14.989 ***** 2026-02-05 02:28:20.324139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 
'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324177 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324185 | orchestrator | 2026-02-05 02:28:20.324193 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-05 02:28:20.324200 | orchestrator | Thursday 05 February 2026 02:28:18 +0000 (0:00:00.165) 0:00:15.155 ***** 2026-02-05 02:28:20.324216 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324224 | orchestrator | 2026-02-05 02:28:20.324231 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-05 02:28:20.324239 | orchestrator | Thursday 05 February 2026 02:28:18 +0000 (0:00:00.134) 0:00:15.289 ***** 2026-02-05 02:28:20.324252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324269 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324276 | orchestrator | 2026-02-05 02:28:20.324284 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-05 02:28:20.324292 | orchestrator | Thursday 05 February 2026 02:28:19 +0000 (0:00:00.163) 0:00:15.453 ***** 2026-02-05 02:28:20.324300 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324307 | orchestrator | 2026-02-05 02:28:20.324315 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-05 02:28:20.324323 | orchestrator | 
Thursday 05 February 2026 02:28:19 +0000 (0:00:00.144) 0:00:15.597 ***** 2026-02-05 02:28:20.324331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324348 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324355 | orchestrator | 2026-02-05 02:28:20.324363 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-05 02:28:20.324371 | orchestrator | Thursday 05 February 2026 02:28:19 +0000 (0:00:00.164) 0:00:15.762 ***** 2026-02-05 02:28:20.324379 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:20.324387 | orchestrator | 2026-02-05 02:28:20.324395 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-05 02:28:20.324403 | orchestrator | Thursday 05 February 2026 02:28:19 +0000 (0:00:00.140) 0:00:15.902 ***** 2026-02-05 02:28:20.324409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324425 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324437 | orchestrator | 2026-02-05 02:28:20.324447 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-05 02:28:20.324458 | orchestrator | Thursday 05 February 2026 02:28:19 +0000 (0:00:00.158) 0:00:16.060 ***** 2026-02-05 02:28:20.324469 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324489 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324499 | orchestrator | 2026-02-05 02:28:20.324510 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-05 02:28:20.324521 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.375) 0:00:16.436 ***** 2026-02-05 02:28:20.324532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:20.324544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:20.324559 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324567 | orchestrator | 2026-02-05 02:28:20.324573 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-05 02:28:20.324580 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.169) 0:00:16.606 ***** 2026-02-05 02:28:20.324586 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:20.324593 | orchestrator | 2026-02-05 02:28:20.324600 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-05 02:28:20.324612 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.140) 0:00:16.746 ***** 2026-02-05 02:28:26.877907 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878066 | orchestrator | 2026-02-05 02:28:26.878084 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-05 02:28:26.878096 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.204) 0:00:16.950 ***** 2026-02-05 02:28:26.878106 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878117 | orchestrator | 2026-02-05 02:28:26.878128 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-05 02:28:26.878138 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.145) 0:00:17.095 ***** 2026-02-05 02:28:26.878197 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 02:28:26.878210 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-05 02:28:26.878220 | orchestrator | } 2026-02-05 02:28:26.878231 | orchestrator | 2026-02-05 02:28:26.878241 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-05 02:28:26.878278 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.148) 0:00:17.244 ***** 2026-02-05 02:28:26.878289 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 02:28:26.878300 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-05 02:28:26.878310 | orchestrator | } 2026-02-05 02:28:26.878319 | orchestrator | 2026-02-05 02:28:26.878329 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-05 02:28:26.878356 | orchestrator | Thursday 05 February 2026 02:28:20 +0000 (0:00:00.151) 0:00:17.395 ***** 2026-02-05 02:28:26.878366 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 02:28:26.878376 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-05 02:28:26.878386 | orchestrator | } 2026-02-05 02:28:26.878396 | orchestrator | 2026-02-05 02:28:26.878405 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-05 02:28:26.878415 | orchestrator | Thursday 05 February 2026 02:28:21 +0000 (0:00:00.157) 0:00:17.553 ***** 2026-02-05 02:28:26.878425 | orchestrator | ok: 
[testbed-node-3] 2026-02-05 02:28:26.878436 | orchestrator | 2026-02-05 02:28:26.878448 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-05 02:28:26.878459 | orchestrator | Thursday 05 February 2026 02:28:21 +0000 (0:00:00.676) 0:00:18.230 ***** 2026-02-05 02:28:26.878470 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:26.878482 | orchestrator | 2026-02-05 02:28:26.878493 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-05 02:28:26.878504 | orchestrator | Thursday 05 February 2026 02:28:22 +0000 (0:00:00.561) 0:00:18.791 ***** 2026-02-05 02:28:26.878515 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:26.878526 | orchestrator | 2026-02-05 02:28:26.878537 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-05 02:28:26.878548 | orchestrator | Thursday 05 February 2026 02:28:22 +0000 (0:00:00.522) 0:00:19.314 ***** 2026-02-05 02:28:26.878560 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:26.878571 | orchestrator | 2026-02-05 02:28:26.878582 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-05 02:28:26.878593 | orchestrator | Thursday 05 February 2026 02:28:23 +0000 (0:00:00.365) 0:00:19.679 ***** 2026-02-05 02:28:26.878603 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878615 | orchestrator | 2026-02-05 02:28:26.878626 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-05 02:28:26.878658 | orchestrator | Thursday 05 February 2026 02:28:23 +0000 (0:00:00.125) 0:00:19.804 ***** 2026-02-05 02:28:26.878670 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878682 | orchestrator | 2026-02-05 02:28:26.878693 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-05 02:28:26.878704 | orchestrator | 
Thursday 05 February 2026 02:28:23 +0000 (0:00:00.131) 0:00:19.936 ***** 2026-02-05 02:28:26.878715 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 02:28:26.878727 | orchestrator |  "vgs_report": { 2026-02-05 02:28:26.878739 | orchestrator |  "vg": [] 2026-02-05 02:28:26.878749 | orchestrator |  } 2026-02-05 02:28:26.878759 | orchestrator | } 2026-02-05 02:28:26.878769 | orchestrator | 2026-02-05 02:28:26.878779 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-05 02:28:26.878789 | orchestrator | Thursday 05 February 2026 02:28:23 +0000 (0:00:00.150) 0:00:20.087 ***** 2026-02-05 02:28:26.878798 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878815 | orchestrator | 2026-02-05 02:28:26.878831 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-05 02:28:26.878847 | orchestrator | Thursday 05 February 2026 02:28:23 +0000 (0:00:00.139) 0:00:20.227 ***** 2026-02-05 02:28:26.878864 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878880 | orchestrator | 2026-02-05 02:28:26.878894 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-05 02:28:26.878909 | orchestrator | Thursday 05 February 2026 02:28:23 +0000 (0:00:00.139) 0:00:20.366 ***** 2026-02-05 02:28:26.878924 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.878940 | orchestrator | 2026-02-05 02:28:26.878956 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-05 02:28:26.878973 | orchestrator | Thursday 05 February 2026 02:28:24 +0000 (0:00:00.153) 0:00:20.520 ***** 2026-02-05 02:28:26.878990 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879007 | orchestrator | 2026-02-05 02:28:26.879023 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-05 02:28:26.879040 | orchestrator | 
Thursday 05 February 2026 02:28:24 +0000 (0:00:00.137) 0:00:20.658 ***** 2026-02-05 02:28:26.879058 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879074 | orchestrator | 2026-02-05 02:28:26.879090 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-05 02:28:26.879107 | orchestrator | Thursday 05 February 2026 02:28:24 +0000 (0:00:00.133) 0:00:20.792 ***** 2026-02-05 02:28:26.879121 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879136 | orchestrator | 2026-02-05 02:28:26.879174 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-05 02:28:26.879189 | orchestrator | Thursday 05 February 2026 02:28:24 +0000 (0:00:00.149) 0:00:20.941 ***** 2026-02-05 02:28:26.879203 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879218 | orchestrator | 2026-02-05 02:28:26.879234 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-05 02:28:26.879250 | orchestrator | Thursday 05 February 2026 02:28:24 +0000 (0:00:00.144) 0:00:21.086 ***** 2026-02-05 02:28:26.879289 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879307 | orchestrator | 2026-02-05 02:28:26.879324 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-05 02:28:26.879342 | orchestrator | Thursday 05 February 2026 02:28:24 +0000 (0:00:00.320) 0:00:21.406 ***** 2026-02-05 02:28:26.879358 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879372 | orchestrator | 2026-02-05 02:28:26.879382 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-05 02:28:26.879392 | orchestrator | Thursday 05 February 2026 02:28:25 +0000 (0:00:00.137) 0:00:21.543 ***** 2026-02-05 02:28:26.879403 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879420 | orchestrator | 2026-02-05 02:28:26.879436 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-05 02:28:26.879452 | orchestrator | Thursday 05 February 2026 02:28:25 +0000 (0:00:00.143) 0:00:21.687 ***** 2026-02-05 02:28:26.879484 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879501 | orchestrator | 2026-02-05 02:28:26.879518 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-05 02:28:26.879534 | orchestrator | Thursday 05 February 2026 02:28:25 +0000 (0:00:00.147) 0:00:21.835 ***** 2026-02-05 02:28:26.879551 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879568 | orchestrator | 2026-02-05 02:28:26.879593 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-05 02:28:26.879611 | orchestrator | Thursday 05 February 2026 02:28:25 +0000 (0:00:00.191) 0:00:22.026 ***** 2026-02-05 02:28:26.879628 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879643 | orchestrator | 2026-02-05 02:28:26.879659 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-05 02:28:26.879676 | orchestrator | Thursday 05 February 2026 02:28:25 +0000 (0:00:00.142) 0:00:22.169 ***** 2026-02-05 02:28:26.879692 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879709 | orchestrator | 2026-02-05 02:28:26.879726 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-05 02:28:26.879743 | orchestrator | Thursday 05 February 2026 02:28:25 +0000 (0:00:00.131) 0:00:22.300 ***** 2026-02-05 02:28:26.879760 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:26.879779 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 
'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:26.879796 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879812 | orchestrator | 2026-02-05 02:28:26.879828 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-05 02:28:26.879844 | orchestrator | Thursday 05 February 2026 02:28:26 +0000 (0:00:00.136) 0:00:22.436 ***** 2026-02-05 02:28:26.879862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:26.879879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:26.879895 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.879911 | orchestrator | 2026-02-05 02:28:26.879927 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-05 02:28:26.879943 | orchestrator | Thursday 05 February 2026 02:28:26 +0000 (0:00:00.146) 0:00:22.582 ***** 2026-02-05 02:28:26.879958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:26.879974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:26.879991 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.880007 | orchestrator | 2026-02-05 02:28:26.880023 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-05 02:28:26.880040 | orchestrator | Thursday 05 February 2026 02:28:26 +0000 (0:00:00.144) 0:00:22.726 ***** 2026-02-05 02:28:26.880056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:26.880073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:26.880090 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.880107 | orchestrator | 2026-02-05 02:28:26.880123 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 02:28:26.880138 | orchestrator | Thursday 05 February 2026 02:28:26 +0000 (0:00:00.144) 0:00:22.871 ***** 2026-02-05 02:28:26.880192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:26.880209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:26.880224 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:26.880240 | orchestrator | 2026-02-05 02:28:26.880255 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 02:28:26.880270 | orchestrator | Thursday 05 February 2026 02:28:26 +0000 (0:00:00.285) 0:00:23.156 ***** 2026-02-05 02:28:26.880298 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:31.806778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:31.806890 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:31.806906 | orchestrator | 2026-02-05 02:28:31.806919 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-05 02:28:31.806931 | orchestrator | Thursday 05 February 2026 02:28:26 +0000 (0:00:00.147) 0:00:23.303 ***** 2026-02-05 02:28:31.806942 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:31.806954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:31.806964 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:31.806975 | orchestrator | 2026-02-05 02:28:31.807004 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 02:28:31.807015 | orchestrator | Thursday 05 February 2026 02:28:27 +0000 (0:00:00.149) 0:00:23.453 ***** 2026-02-05 02:28:31.807026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:31.807037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:31.807048 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:31.807059 | orchestrator | 2026-02-05 02:28:31.807070 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 02:28:31.807081 | orchestrator | Thursday 05 February 2026 02:28:27 +0000 (0:00:00.150) 0:00:23.603 ***** 2026-02-05 02:28:31.807091 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:31.807103 | orchestrator | 2026-02-05 02:28:31.807114 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 02:28:31.807124 | orchestrator | Thursday 05 February 2026 02:28:27 +0000 
(0:00:00.529) 0:00:24.133 ***** 2026-02-05 02:28:31.807135 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:31.807145 | orchestrator | 2026-02-05 02:28:31.807188 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 02:28:31.807200 | orchestrator | Thursday 05 February 2026 02:28:28 +0000 (0:00:00.528) 0:00:24.662 ***** 2026-02-05 02:28:31.807211 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:28:31.807221 | orchestrator | 2026-02-05 02:28:31.807232 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 02:28:31.807244 | orchestrator | Thursday 05 February 2026 02:28:28 +0000 (0:00:00.155) 0:00:24.817 ***** 2026-02-05 02:28:31.807255 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'vg_name': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}) 2026-02-05 02:28:31.807267 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'vg_name': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}) 2026-02-05 02:28:31.807302 | orchestrator | 2026-02-05 02:28:31.807314 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 02:28:31.807327 | orchestrator | Thursday 05 February 2026 02:28:28 +0000 (0:00:00.171) 0:00:24.989 ***** 2026-02-05 02:28:31.807341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:31.807355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:31.807370 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:31.807390 | orchestrator | 2026-02-05 02:28:31.807408 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-05 02:28:31.807440 | orchestrator | Thursday 05 February 2026 02:28:28 +0000 (0:00:00.168) 0:00:25.157 ***** 2026-02-05 02:28:31.807459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:31.807478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:31.807496 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:31.807514 | orchestrator | 2026-02-05 02:28:31.807532 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 02:28:31.807551 | orchestrator | Thursday 05 February 2026 02:28:28 +0000 (0:00:00.172) 0:00:25.329 ***** 2026-02-05 02:28:31.807569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 02:28:31.807587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 02:28:31.807606 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:28:31.807624 | orchestrator | 2026-02-05 02:28:31.807642 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 02:28:31.807661 | orchestrator | Thursday 05 February 2026 02:28:29 +0000 (0:00:00.157) 0:00:25.487 ***** 2026-02-05 02:28:31.807704 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 02:28:31.807724 | orchestrator |  "lvm_report": { 2026-02-05 02:28:31.807741 | orchestrator |  "lv": [ 2026-02-05 02:28:31.807752 | orchestrator |  { 2026-02-05 02:28:31.807763 | orchestrator |  "lv_name": 
"osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567", 2026-02-05 02:28:31.807775 | orchestrator |  "vg_name": "ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567" 2026-02-05 02:28:31.807786 | orchestrator |  }, 2026-02-05 02:28:31.807797 | orchestrator |  { 2026-02-05 02:28:31.807808 | orchestrator |  "lv_name": "osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e", 2026-02-05 02:28:31.807819 | orchestrator |  "vg_name": "ceph-de37fca4-ea41-596c-ab1a-50038d0e278e" 2026-02-05 02:28:31.807830 | orchestrator |  } 2026-02-05 02:28:31.807841 | orchestrator |  ], 2026-02-05 02:28:31.807852 | orchestrator |  "pv": [ 2026-02-05 02:28:31.807863 | orchestrator |  { 2026-02-05 02:28:31.807874 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 02:28:31.807885 | orchestrator |  "vg_name": "ceph-de37fca4-ea41-596c-ab1a-50038d0e278e" 2026-02-05 02:28:31.807896 | orchestrator |  }, 2026-02-05 02:28:31.807907 | orchestrator |  { 2026-02-05 02:28:31.807926 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 02:28:31.807938 | orchestrator |  "vg_name": "ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567" 2026-02-05 02:28:31.807949 | orchestrator |  } 2026-02-05 02:28:31.807959 | orchestrator |  ] 2026-02-05 02:28:31.807970 | orchestrator |  } 2026-02-05 02:28:31.807982 | orchestrator | } 2026-02-05 02:28:31.808004 | orchestrator | 2026-02-05 02:28:31.808016 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-05 02:28:31.808026 | orchestrator | 2026-02-05 02:28:31.808037 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 02:28:31.808049 | orchestrator | Thursday 05 February 2026 02:28:29 +0000 (0:00:00.433) 0:00:25.920 ***** 2026-02-05 02:28:31.808060 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-05 02:28:31.808071 | orchestrator | 2026-02-05 02:28:31.808081 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 
02:28:31.808092 | orchestrator | Thursday 05 February 2026 02:28:29 +0000 (0:00:00.262) 0:00:26.182 ***** 2026-02-05 02:28:31.808103 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:28:31.808114 | orchestrator | 2026-02-05 02:28:31.808125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:31.808136 | orchestrator | Thursday 05 February 2026 02:28:29 +0000 (0:00:00.206) 0:00:26.389 ***** 2026-02-05 02:28:31.808208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-05 02:28:31.808237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-05 02:28:31.808255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-05 02:28:31.808272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-05 02:28:31.808289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-05 02:28:31.808307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-05 02:28:31.808325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-05 02:28:31.808345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-05 02:28:31.808364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-05 02:28:31.808382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-05 02:28:31.808401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-05 02:28:31.808412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-05 02:28:31.808423 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-05 02:28:31.808434 | orchestrator | 2026-02-05 02:28:31.808445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:31.808456 | orchestrator | Thursday 05 February 2026 02:28:30 +0000 (0:00:00.393) 0:00:26.782 ***** 2026-02-05 02:28:31.808467 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:31.808478 | orchestrator | 2026-02-05 02:28:31.808489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:31.808505 | orchestrator | Thursday 05 February 2026 02:28:30 +0000 (0:00:00.198) 0:00:26.981 ***** 2026-02-05 02:28:31.808523 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:31.808541 | orchestrator | 2026-02-05 02:28:31.808559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:31.808576 | orchestrator | Thursday 05 February 2026 02:28:30 +0000 (0:00:00.207) 0:00:27.188 ***** 2026-02-05 02:28:31.808594 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:31.808611 | orchestrator | 2026-02-05 02:28:31.808628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:31.808645 | orchestrator | Thursday 05 February 2026 02:28:30 +0000 (0:00:00.200) 0:00:27.389 ***** 2026-02-05 02:28:31.808663 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:31.808681 | orchestrator | 2026-02-05 02:28:31.808700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:31.808720 | orchestrator | Thursday 05 February 2026 02:28:31 +0000 (0:00:00.186) 0:00:27.576 ***** 2026-02-05 02:28:31.808751 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:31.808762 | orchestrator | 2026-02-05 02:28:31.808773 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-05 02:28:31.808784 | orchestrator | Thursday 05 February 2026 02:28:31 +0000 (0:00:00.189) 0:00:27.765 ***** 2026-02-05 02:28:31.808795 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:31.808805 | orchestrator | 2026-02-05 02:28:31.808828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:42.288841 | orchestrator | Thursday 05 February 2026 02:28:31 +0000 (0:00:00.466) 0:00:28.231 ***** 2026-02-05 02:28:42.288934 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:42.288945 | orchestrator | 2026-02-05 02:28:42.288953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:42.288960 | orchestrator | Thursday 05 February 2026 02:28:31 +0000 (0:00:00.197) 0:00:28.429 ***** 2026-02-05 02:28:42.288967 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:42.288974 | orchestrator | 2026-02-05 02:28:42.288981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:42.288988 | orchestrator | Thursday 05 February 2026 02:28:32 +0000 (0:00:00.203) 0:00:28.633 ***** 2026-02-05 02:28:42.288996 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde) 2026-02-05 02:28:42.289004 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde) 2026-02-05 02:28:42.289011 | orchestrator | 2026-02-05 02:28:42.289032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:42.289039 | orchestrator | Thursday 05 February 2026 02:28:32 +0000 (0:00:00.428) 0:00:29.061 ***** 2026-02-05 02:28:42.289046 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a) 2026-02-05 02:28:42.289052 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a)

TASK [Add known links to the list of available block devices] ******************
Thursday 05 February 2026 02:28:33 +0000 (0:00:00.413) 0:00:29.474 *****
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930)

TASK [Add known links to the list of available block devices] ******************
Thursday 05 February 2026 02:28:33 +0000 (0:00:00.437) 0:00:29.912 *****
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a)

TASK [Add known links to the list of available block devices] ******************
Thursday 05 February 2026 02:28:33 +0000 (0:00:00.462) 0:00:30.375 *****
ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:34 +0000 (0:00:00.332) 0:00:30.708 *****
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:34 +0000 (0:00:00.413) 0:00:31.121 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:34 +0000 (0:00:00.217) 0:00:31.339 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:35 +0000 (0:00:00.200) 0:00:31.540 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:35 +0000 (0:00:00.640) 0:00:32.181 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:35 +0000 (0:00:00.213) 0:00:32.394 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:36 +0000 (0:00:00.206) 0:00:32.601 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:36 +0000 (0:00:00.208) 0:00:32.809 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:36 +0000 (0:00:00.205) 0:00:33.015 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:36 +0000 (0:00:00.194) 0:00:33.209 *****
ok: [testbed-node-4] => (item=sda1)
ok: [testbed-node-4] => (item=sda14)
ok: [testbed-node-4] => (item=sda15)
ok: [testbed-node-4] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:37 +0000 (0:00:00.640) 0:00:33.850 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:37 +0000 (0:00:00.208) 0:00:34.058 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:37 +0000 (0:00:00.222) 0:00:34.280 *****
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Thursday 05 February 2026 02:28:38 +0000 (0:00:00.208) 0:00:34.489 *****
skipping: [testbed-node-4]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Thursday 05 February 2026 02:28:38 +0000 (0:00:00.208) 0:00:34.697 *****
skipping: [testbed-node-4]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Thursday 05 February 2026 02:28:38 +0000 (0:00:00.138) 0:00:34.836 *****
ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '599b5b3c-37df-591b-a248-24d26d466625'}})
ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}})

TASK [Create block VGs] ********************************************************
Thursday 05 February 2026 02:28:38 +0000 (0:00:00.418) 0:00:35.255 *****
changed: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
changed: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})

TASK [Print 'Create block VGs'] ************************************************
Thursday 05 February 2026 02:28:40 +0000 (0:00:01.902) 0:00:37.158 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create block LVs] ********************************************************
Thursday 05 February 2026 02:28:40 +0000 (0:00:00.152) 0:00:37.311 *****
changed: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
changed: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})

TASK [Print 'Create block LVs'] ************************************************
Thursday 05 February 2026 02:28:42 +0000 (0:00:01.396) 0:00:38.708 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create DB VGs] ***********************************************************
Thursday 05 February 2026 02:28:42 +0000 (0:00:00.143) 0:00:38.851 *****
skipping: [testbed-node-4]

TASK [Print 'Create DB VGs'] ***************************************************
Thursday 05 February 2026 02:28:42 +0000 (0:00:00.131) 0:00:38.983 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create WAL VGs] **********************************************************
Thursday 05 February 2026 02:28:42 +0000 (0:00:00.165) 0:00:39.148 *****
skipping: [testbed-node-4]

TASK [Print 'Create WAL VGs'] **************************************************
Thursday 05 February 2026 02:28:42 +0000 (0:00:00.142) 0:00:39.291 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create DB+WAL VGs] *******************************************************
Thursday 05 February 2026 02:28:43 +0000 (0:00:00.157) 0:00:39.449 *****
skipping: [testbed-node-4]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Thursday 05 February 2026 02:28:43 +0000 (0:00:00.137) 0:00:39.587 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Prepare variables for OSD count check] ***********************************
Thursday 05 February 2026 02:28:43 +0000 (0:00:00.153) 0:00:39.741 *****
ok: [testbed-node-4]
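The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above follow a fixed naming scheme: each OSD device carries an `osd_lvm_uuid`, from which the playbook derives a volume group `ceph-<uuid>` and a logical volume `osd-block-<uuid>`. A minimal Python sketch of that mapping (illustrative only, not the playbook's actual code):

```python
# Input shape taken from the "Create dict of block VGs -> PVs" task above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "599b5b3c-37df-591b-a248-24d26d466625"},
    "sdc": {"osd_lvm_uuid": "f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c"},
}

def lvm_volumes_from_osd_devices(devices: dict) -> list[dict]:
    """Build lvm_volumes-style entries: one VG/LV name pair per OSD device.

    Helper name is hypothetical; the naming scheme (ceph-<uuid> VG,
    osd-block-<uuid> LV) matches the items shown in the log.
    """
    volumes = []
    for value in devices.values():
        uuid = value["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",  # LV created by "Create block LVs"
            "data_vg": f"ceph-{uuid}",    # VG created by "Create block VGs"
        })
    return volumes

for vol in lvm_volumes_from_osd_devices(ceph_osd_devices):
    print(vol["data_vg"], "/", vol["data"])
```

Deriving both names from one stable UUID keeps the VG/LV pairing idempotent across replays of the playbook, which is why the later "Fail if ... LV defined in lvm_volumes is missing" checks can match on exact names.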

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Thursday 05 February 2026 02:28:43 +0000 (0:00:00.142) 0:00:39.884 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Thursday 05 February 2026 02:28:43 +0000 (0:00:00.372) 0:00:40.256 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Thursday 05 February 2026 02:28:43 +0000 (0:00:00.169) 0:00:40.426 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Thursday 05 February 2026 02:28:44 +0000 (0:00:00.165) 0:00:40.592 *****
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Thursday 05 February 2026 02:28:44 +0000 (0:00:00.149) 0:00:40.741 *****
skipping: [testbed-node-4]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Thursday 05 February 2026 02:28:44 +0000 (0:00:00.136) 0:00:40.877 *****
skipping: [testbed-node-4]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Thursday 05 February 2026 02:28:44 +0000 (0:00:00.126) 0:00:41.004 *****
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Thursday 05 February 2026 02:28:44 +0000 (0:00:00.157) 0:00:41.162 *****
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Thursday 05 February 2026 02:28:44 +0000 (0:00:00.163) 0:00:41.326 *****
ok: [testbed-node-4] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
Thursday 05 February 2026 02:28:45 +0000 (0:00:00.162) 0:00:41.488 *****
ok: [testbed-node-4]

TASK [Gather WAL VGs with total and available size in bytes] *******************
Thursday 05 February 2026 02:28:45 +0000 (0:00:00.543) 0:00:42.032 *****
ok: [testbed-node-4]

TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
Thursday 05 February 2026 02:28:46 +0000 (0:00:00.517) 0:00:42.549 *****
ok: [testbed-node-4]

TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
Thursday 05 February 2026 02:28:46 +0000 (0:00:00.588) 0:00:43.138 *****
ok: [testbed-node-4]

TASK [Calculate VG sizes (without buffer)] *************************************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.336) 0:00:43.474 *****
skipping: [testbed-node-4]

TASK [Calculate VG sizes (with buffer)] ****************************************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.118) 0:00:43.593 *****
skipping: [testbed-node-4]

TASK [Print LVM VGs report data] ***********************************************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.113) 0:00:43.706 *****
ok: [testbed-node-4] => {
    "vgs_report": {
        "vg": []
    }
}

TASK [Print LVM VG sizes] ******************************************************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.144) 0:00:43.851 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for LVs on ceph_db_devices] ************************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.145) 0:00:43.997 *****
skipping: [testbed-node-4]

TASK [Print size needed for LVs on ceph_db_devices] ****************************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.144) 0:00:44.141 *****
skipping: [testbed-node-4]

TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.141) 0:00:44.283 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
Thursday 05 February 2026 02:28:47 +0000 (0:00:00.135) 0:00:44.418 *****
skipping: [testbed-node-4]

TASK [Print size needed for LVs on ceph_wal_devices] ***************************
Thursday 05 February 2026 02:28:48 +0000 (0:00:00.158) 0:00:44.576 *****
skipping: [testbed-node-4]

TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
Thursday 05 February 2026 02:28:48 +0000 (0:00:00.132) 0:00:44.709 *****
skipping: [testbed-node-4]
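The "Gather ... VGs with total and available size in bytes" tasks and the subsequent "Combine JSON from _db/wal/db_wal_vgs_cmd_output" step suggest the playbook collects LVM's JSON report (`vgs --reportformat json` emits `{"report": [{"vg": [...]}]}`) once per device class and merges the `vg` lists; the empty `vgs_report` printed above indicates no DB/WAL VGs exist on this node. A sketch of such a merge, with made-up sample output (the variable names and values here are assumptions for illustration):

```python
import json

# Hypothetical captured stdout of two `vgs --reportformat json` runs.
db_vgs_json = ('{"report": [{"vg": [{"vg_name": "ceph-db-0", '
               '"vg_size": "107374182400", "vg_free": "32212254720"}]}]}')
wal_vgs_json = '{"report": [{"vg": []}]}'

def combine_vg_reports(*outputs: str) -> dict:
    """Merge the 'vg' lists of several LVM JSON reports into one report."""
    combined = []
    for raw in outputs:
        # Each report document holds a list of report sections, each with
        # a 'vg' list (empty when no matching VGs were found).
        for report in json.loads(raw)["report"]:
            combined.extend(report.get("vg", []))
    return {"vg": combined}

vgs_report = combine_vg_reports(db_vgs_json, wal_vgs_json)
print(vgs_report)
```

With no DB/WAL VGs gathered, the merged report is `{"vg": []}`, which is exactly the shape the "Print LVM VGs report data" task shows, and why every size calculation and size-limit check that follows is skipped.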

TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
Thursday 05 February 2026 02:28:48 +0000 (0:00:00.132) 0:00:44.842 *****
skipping: [testbed-node-4]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Thursday 05 February 2026 02:28:48 +0000 (0:00:00.151) 0:00:44.994 *****
skipping: [testbed-node-4]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
Thursday 05 February 2026 02:28:48 +0000 (0:00:00.339) 0:00:45.333 *****
skipping: [testbed-node-4]

TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.150) 0:00:45.483 *****
skipping: [testbed-node-4]

TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.134) 0:00:45.618 *****
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.146) 0:00:45.765 *****
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.137) 0:00:45.903 *****
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_devices] ***************************************
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.143) 0:00:46.047 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.164) 0:00:46.211 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_wal_devices] *************************************
Thursday 05 February 2026 02:28:49 +0000 (0:00:00.157) 0:00:46.369 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
Thursday 05 February 2026 02:28:50 +0000 (0:00:00.154) 0:00:46.524 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
Thursday 05 February 2026 02:28:50 +0000 (0:00:00.157) 0:00:46.681 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
Thursday 05 February 2026 02:28:50 +0000 (0:00:00.169) 0:00:46.850 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
Thursday 05 February 2026 02:28:50 +0000 (0:00:00.187) 0:00:47.038 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
Thursday 05 February 2026 02:28:50 +0000 (0:00:00.351) 0:00:47.389 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Get list of Ceph LVs with associated VGs] ********************************
Thursday 05 February 2026 02:28:51 +0000 (0:00:00.156) 0:00:47.546 *****
ok: [testbed-node-4]

TASK [Get list of Ceph PVs with associated VGs] ********************************
Thursday 05 February 2026 02:28:51 +0000 (0:00:00.532) 0:00:48.078 *****
ok: [testbed-node-4]

TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
Thursday 05 February 2026 02:28:52 +0000 (0:00:00.163) 0:00:48.598 *****
ok: [testbed-node-4]

TASK [Create list of VG/LV names] **********************************************
Thursday 05 February 2026 02:28:52 +0000 (0:00:00.163) 0:00:48.762 *****
ok: [testbed-node-4] => (item={'lv_name': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'vg_name': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'vg_name': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})

TASK [Fail if block LV defined in lvm_volumes is missing] **********************
Thursday 05 February 2026 02:28:52 +0000 (0:00:00.166) 0:00:48.928 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
Thursday 05 February 2026 02:28:52 +0000 (0:00:00.161) 0:00:49.090 *****
skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
skipping: [testbed-node-4]

TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
orchestrator | Thursday 05 February 2026 02:28:52 +0000 (0:00:00.154) 0:00:49.244 ***** 2026-02-05 02:28:59.256267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})  2026-02-05 02:28:59.256321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})  2026-02-05 02:28:59.256334 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:28:59.256346 | orchestrator | 2026-02-05 02:28:59.256357 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 02:28:59.256368 | orchestrator | Thursday 05 February 2026 02:28:52 +0000 (0:00:00.170) 0:00:49.415 ***** 2026-02-05 02:28:59.256379 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 02:28:59.256390 | orchestrator |  "lvm_report": { 2026-02-05 02:28:59.256403 | orchestrator |  "lv": [ 2026-02-05 02:28:59.256414 | orchestrator |  { 2026-02-05 02:28:59.256425 | orchestrator |  "lv_name": "osd-block-599b5b3c-37df-591b-a248-24d26d466625", 2026-02-05 02:28:59.256437 | orchestrator |  "vg_name": "ceph-599b5b3c-37df-591b-a248-24d26d466625" 2026-02-05 02:28:59.256448 | orchestrator |  }, 2026-02-05 02:28:59.256459 | orchestrator |  { 2026-02-05 02:28:59.256470 | orchestrator |  "lv_name": "osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c", 2026-02-05 02:28:59.256481 | orchestrator |  "vg_name": "ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c" 2026-02-05 02:28:59.256492 | orchestrator |  } 2026-02-05 02:28:59.256503 | orchestrator |  ], 2026-02-05 02:28:59.256514 | orchestrator |  "pv": [ 2026-02-05 02:28:59.256525 | orchestrator |  { 2026-02-05 02:28:59.256536 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 02:28:59.256547 | orchestrator |  "vg_name": "ceph-599b5b3c-37df-591b-a248-24d26d466625" 2026-02-05 02:28:59.256559 | orchestrator |  }, 2026-02-05 
02:28:59.256571 | orchestrator |  { 2026-02-05 02:28:59.256584 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 02:28:59.256596 | orchestrator |  "vg_name": "ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c" 2026-02-05 02:28:59.256623 | orchestrator |  } 2026-02-05 02:28:59.256647 | orchestrator |  ] 2026-02-05 02:28:59.256660 | orchestrator |  } 2026-02-05 02:28:59.256673 | orchestrator | } 2026-02-05 02:28:59.256686 | orchestrator | 2026-02-05 02:28:59.256700 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-05 02:28:59.256712 | orchestrator | 2026-02-05 02:28:59.256725 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 02:28:59.256738 | orchestrator | Thursday 05 February 2026 02:28:53 +0000 (0:00:00.294) 0:00:49.709 ***** 2026-02-05 02:28:59.256750 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 02:28:59.256763 | orchestrator | 2026-02-05 02:28:59.256776 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 02:28:59.256789 | orchestrator | Thursday 05 February 2026 02:28:53 +0000 (0:00:00.693) 0:00:50.402 ***** 2026-02-05 02:28:59.256802 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:28:59.256814 | orchestrator | 2026-02-05 02:28:59.256827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.256840 | orchestrator | Thursday 05 February 2026 02:28:54 +0000 (0:00:00.245) 0:00:50.648 ***** 2026-02-05 02:28:59.256853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-05 02:28:59.256865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-05 02:28:59.256878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-05 02:28:59.256890 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-05 02:28:59.256903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-05 02:28:59.256916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-05 02:28:59.256929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-05 02:28:59.256949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-05 02:28:59.256960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-05 02:28:59.256971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-05 02:28:59.256982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-05 02:28:59.256993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-05 02:28:59.257004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-05 02:28:59.257015 | orchestrator | 2026-02-05 02:28:59.257025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257036 | orchestrator | Thursday 05 February 2026 02:28:54 +0000 (0:00:00.413) 0:00:51.061 ***** 2026-02-05 02:28:59.257047 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257058 | orchestrator | 2026-02-05 02:28:59.257069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257080 | orchestrator | Thursday 05 February 2026 02:28:54 +0000 (0:00:00.220) 0:00:51.282 ***** 2026-02-05 02:28:59.257091 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257102 | orchestrator | 2026-02-05 
02:28:59.257113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257142 | orchestrator | Thursday 05 February 2026 02:28:55 +0000 (0:00:00.203) 0:00:51.485 ***** 2026-02-05 02:28:59.257154 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257180 | orchestrator | 2026-02-05 02:28:59.257192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257203 | orchestrator | Thursday 05 February 2026 02:28:55 +0000 (0:00:00.198) 0:00:51.684 ***** 2026-02-05 02:28:59.257214 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257224 | orchestrator | 2026-02-05 02:28:59.257235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257246 | orchestrator | Thursday 05 February 2026 02:28:55 +0000 (0:00:00.214) 0:00:51.899 ***** 2026-02-05 02:28:59.257258 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257280 | orchestrator | 2026-02-05 02:28:59.257291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257302 | orchestrator | Thursday 05 February 2026 02:28:55 +0000 (0:00:00.204) 0:00:52.104 ***** 2026-02-05 02:28:59.257313 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257324 | orchestrator | 2026-02-05 02:28:59.257335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257346 | orchestrator | Thursday 05 February 2026 02:28:55 +0000 (0:00:00.200) 0:00:52.304 ***** 2026-02-05 02:28:59.257357 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257368 | orchestrator | 2026-02-05 02:28:59.257379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257390 | orchestrator | Thursday 05 February 2026 02:28:56 +0000 (0:00:00.203) 
0:00:52.508 ***** 2026-02-05 02:28:59.257401 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:28:59.257412 | orchestrator | 2026-02-05 02:28:59.257422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257433 | orchestrator | Thursday 05 February 2026 02:28:56 +0000 (0:00:00.205) 0:00:52.713 ***** 2026-02-05 02:28:59.257444 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa) 2026-02-05 02:28:59.257456 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa) 2026-02-05 02:28:59.257467 | orchestrator | 2026-02-05 02:28:59.257478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257489 | orchestrator | Thursday 05 February 2026 02:28:57 +0000 (0:00:00.826) 0:00:53.540 ***** 2026-02-05 02:28:59.257599 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9) 2026-02-05 02:28:59.257624 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9) 2026-02-05 02:28:59.257635 | orchestrator | 2026-02-05 02:28:59.257646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257657 | orchestrator | Thursday 05 February 2026 02:28:57 +0000 (0:00:00.427) 0:00:53.967 ***** 2026-02-05 02:28:59.257668 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49) 2026-02-05 02:28:59.257679 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49) 2026-02-05 02:28:59.257690 | orchestrator | 2026-02-05 02:28:59.257701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257712 | orchestrator | Thursday 05 
February 2026 02:28:58 +0000 (0:00:00.472) 0:00:54.440 ***** 2026-02-05 02:28:59.257723 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc) 2026-02-05 02:28:59.257734 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc) 2026-02-05 02:28:59.257745 | orchestrator | 2026-02-05 02:28:59.257756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 02:28:59.257766 | orchestrator | Thursday 05 February 2026 02:28:58 +0000 (0:00:00.451) 0:00:54.891 ***** 2026-02-05 02:28:59.257777 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 02:28:59.257788 | orchestrator | 2026-02-05 02:28:59.257799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:28:59.257810 | orchestrator | Thursday 05 February 2026 02:28:58 +0000 (0:00:00.344) 0:00:55.235 ***** 2026-02-05 02:28:59.257821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-05 02:28:59.257832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-05 02:28:59.257842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-05 02:28:59.257853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-05 02:28:59.257864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-05 02:28:59.257875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-05 02:28:59.257886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-05 02:28:59.257897 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-05 02:28:59.257908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-05 02:28:59.257927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-05 02:28:59.257945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-05 02:28:59.257976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-05 02:29:08.093827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-05 02:29:08.093949 | orchestrator | 2026-02-05 02:29:08.093973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.093992 | orchestrator | Thursday 05 February 2026 02:28:59 +0000 (0:00:00.437) 0:00:55.673 ***** 2026-02-05 02:29:08.094010 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094102 | orchestrator | 2026-02-05 02:29:08.094122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094160 | orchestrator | Thursday 05 February 2026 02:28:59 +0000 (0:00:00.218) 0:00:55.892 ***** 2026-02-05 02:29:08.094212 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094251 | orchestrator | 2026-02-05 02:29:08.094268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094289 | orchestrator | Thursday 05 February 2026 02:28:59 +0000 (0:00:00.203) 0:00:56.095 ***** 2026-02-05 02:29:08.094307 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094318 | orchestrator | 2026-02-05 02:29:08.094330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094349 | 
orchestrator | Thursday 05 February 2026 02:28:59 +0000 (0:00:00.194) 0:00:56.289 ***** 2026-02-05 02:29:08.094370 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094387 | orchestrator | 2026-02-05 02:29:08.094401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094419 | orchestrator | Thursday 05 February 2026 02:29:00 +0000 (0:00:00.198) 0:00:56.488 ***** 2026-02-05 02:29:08.094436 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094451 | orchestrator | 2026-02-05 02:29:08.094469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094488 | orchestrator | Thursday 05 February 2026 02:29:00 +0000 (0:00:00.666) 0:00:57.154 ***** 2026-02-05 02:29:08.094502 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094515 | orchestrator | 2026-02-05 02:29:08.094529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094543 | orchestrator | Thursday 05 February 2026 02:29:00 +0000 (0:00:00.214) 0:00:57.368 ***** 2026-02-05 02:29:08.094556 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094567 | orchestrator | 2026-02-05 02:29:08.094578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094589 | orchestrator | Thursday 05 February 2026 02:29:01 +0000 (0:00:00.217) 0:00:57.586 ***** 2026-02-05 02:29:08.094600 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094611 | orchestrator | 2026-02-05 02:29:08.094622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094633 | orchestrator | Thursday 05 February 2026 02:29:01 +0000 (0:00:00.206) 0:00:57.792 ***** 2026-02-05 02:29:08.094644 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-05 02:29:08.094656 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-05 02:29:08.094667 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-05 02:29:08.094679 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-05 02:29:08.094690 | orchestrator | 2026-02-05 02:29:08.094701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094712 | orchestrator | Thursday 05 February 2026 02:29:02 +0000 (0:00:00.684) 0:00:58.477 ***** 2026-02-05 02:29:08.094723 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094734 | orchestrator | 2026-02-05 02:29:08.094745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094755 | orchestrator | Thursday 05 February 2026 02:29:02 +0000 (0:00:00.206) 0:00:58.683 ***** 2026-02-05 02:29:08.094766 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094777 | orchestrator | 2026-02-05 02:29:08.094788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094799 | orchestrator | Thursday 05 February 2026 02:29:02 +0000 (0:00:00.218) 0:00:58.902 ***** 2026-02-05 02:29:08.094809 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094820 | orchestrator | 2026-02-05 02:29:08.094831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 02:29:08.094842 | orchestrator | Thursday 05 February 2026 02:29:02 +0000 (0:00:00.206) 0:00:59.109 ***** 2026-02-05 02:29:08.094853 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.094864 | orchestrator | 2026-02-05 02:29:08.094875 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-05 02:29:08.094905 | orchestrator | Thursday 05 February 2026 02:29:02 +0000 (0:00:00.201) 0:00:59.311 ***** 2026-02-05 02:29:08.094916 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
02:29:08.094927 | orchestrator | 2026-02-05 02:29:08.094947 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-05 02:29:08.094958 | orchestrator | Thursday 05 February 2026 02:29:03 +0000 (0:00:00.137) 0:00:59.448 ***** 2026-02-05 02:29:08.094970 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '27670a2c-7838-5627-a951-e8a6d97fe4be'}}) 2026-02-05 02:29:08.094982 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '51c61bf5-abad-542f-be8e-c69d5e860565'}}) 2026-02-05 02:29:08.094993 | orchestrator | 2026-02-05 02:29:08.095005 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-05 02:29:08.095015 | orchestrator | Thursday 05 February 2026 02:29:03 +0000 (0:00:00.192) 0:00:59.641 ***** 2026-02-05 02:29:08.095027 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}) 2026-02-05 02:29:08.095039 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}) 2026-02-05 02:29:08.095050 | orchestrator | 2026-02-05 02:29:08.095061 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-05 02:29:08.095093 | orchestrator | Thursday 05 February 2026 02:29:05 +0000 (0:00:01.800) 0:01:01.442 ***** 2026-02-05 02:29:08.095105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:08.095118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:08.095129 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 02:29:08.095140 | orchestrator | 2026-02-05 02:29:08.095157 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-05 02:29:08.095234 | orchestrator | Thursday 05 February 2026 02:29:05 +0000 (0:00:00.383) 0:01:01.825 ***** 2026-02-05 02:29:08.095255 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}) 2026-02-05 02:29:08.095272 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}) 2026-02-05 02:29:08.095284 | orchestrator | 2026-02-05 02:29:08.095295 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-05 02:29:08.095306 | orchestrator | Thursday 05 February 2026 02:29:06 +0000 (0:00:01.332) 0:01:03.157 ***** 2026-02-05 02:29:08.095317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:08.095328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:08.095340 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.095351 | orchestrator | 2026-02-05 02:29:08.095364 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-05 02:29:08.095383 | orchestrator | Thursday 05 February 2026 02:29:06 +0000 (0:00:00.148) 0:01:03.306 ***** 2026-02-05 02:29:08.095398 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.095427 | orchestrator | 2026-02-05 02:29:08.095446 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-05 02:29:08.095464 | 
orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.142) 0:01:03.448 ***** 2026-02-05 02:29:08.095482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:08.095500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:08.095530 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.095549 | orchestrator | 2026-02-05 02:29:08.095567 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-05 02:29:08.095586 | orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.155) 0:01:03.604 ***** 2026-02-05 02:29:08.095605 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.095622 | orchestrator | 2026-02-05 02:29:08.095640 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-05 02:29:08.095658 | orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.144) 0:01:03.748 ***** 2026-02-05 02:29:08.095676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:08.095693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:08.095712 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.095731 | orchestrator | 2026-02-05 02:29:08.095750 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-05 02:29:08.095767 | orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.161) 0:01:03.909 ***** 2026-02-05 02:29:08.095786 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 02:29:08.095803 | orchestrator | 2026-02-05 02:29:08.095824 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-05 02:29:08.095842 | orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.142) 0:01:04.052 ***** 2026-02-05 02:29:08.095861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:08.095880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:08.095899 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:08.095918 | orchestrator | 2026-02-05 02:29:08.095937 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-05 02:29:08.095955 | orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.153) 0:01:04.206 ***** 2026-02-05 02:29:08.095973 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:08.095992 | orchestrator | 2026-02-05 02:29:08.096011 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-05 02:29:08.096031 | orchestrator | Thursday 05 February 2026 02:29:07 +0000 (0:00:00.152) 0:01:04.359 ***** 2026-02-05 02:29:08.096067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:14.425970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:14.426136 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.426154 | orchestrator | 2026-02-05 02:29:14.426194 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-05 02:29:14.426209 | orchestrator | Thursday 05 February 2026 02:29:08 +0000 (0:00:00.158) 0:01:04.517 ***** 2026-02-05 02:29:14.426236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:14.426248 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:14.426259 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.426270 | orchestrator | 2026-02-05 02:29:14.426281 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-05 02:29:14.426292 | orchestrator | Thursday 05 February 2026 02:29:08 +0000 (0:00:00.160) 0:01:04.678 ***** 2026-02-05 02:29:14.426329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:14.426341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:14.426352 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.426362 | orchestrator | 2026-02-05 02:29:14.426374 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-05 02:29:14.426384 | orchestrator | Thursday 05 February 2026 02:29:08 +0000 (0:00:00.346) 0:01:05.025 ***** 2026-02-05 02:29:14.426395 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.426406 | orchestrator | 2026-02-05 02:29:14.426417 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-05 02:29:14.426428 | orchestrator | Thursday 05 February 2026 02:29:08 
+0000 (0:00:00.145) 0:01:05.170 ***** 2026-02-05 02:29:14.426439 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.426450 | orchestrator | 2026-02-05 02:29:14.426461 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-05 02:29:14.426472 | orchestrator | Thursday 05 February 2026 02:29:08 +0000 (0:00:00.140) 0:01:05.310 ***** 2026-02-05 02:29:14.426483 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.426493 | orchestrator | 2026-02-05 02:29:14.426504 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-05 02:29:14.426517 | orchestrator | Thursday 05 February 2026 02:29:09 +0000 (0:00:00.142) 0:01:05.452 ***** 2026-02-05 02:29:14.426530 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 02:29:14.426544 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-05 02:29:14.426558 | orchestrator | } 2026-02-05 02:29:14.426570 | orchestrator | 2026-02-05 02:29:14.426583 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-05 02:29:14.426596 | orchestrator | Thursday 05 February 2026 02:29:09 +0000 (0:00:00.161) 0:01:05.614 ***** 2026-02-05 02:29:14.426609 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 02:29:14.426622 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-05 02:29:14.426635 | orchestrator | } 2026-02-05 02:29:14.426648 | orchestrator | 2026-02-05 02:29:14.426661 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-05 02:29:14.426674 | orchestrator | Thursday 05 February 2026 02:29:09 +0000 (0:00:00.146) 0:01:05.761 ***** 2026-02-05 02:29:14.426687 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 02:29:14.426700 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-05 02:29:14.426713 | orchestrator | } 2026-02-05 02:29:14.426726 | orchestrator | 2026-02-05 02:29:14.426738 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-05 02:29:14.426751 | orchestrator | Thursday 05 February 2026 02:29:09 +0000 (0:00:00.146) 0:01:05.908 ***** 2026-02-05 02:29:14.426764 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:14.426777 | orchestrator | 2026-02-05 02:29:14.426790 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-05 02:29:14.426806 | orchestrator | Thursday 05 February 2026 02:29:10 +0000 (0:00:00.535) 0:01:06.443 ***** 2026-02-05 02:29:14.426825 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:14.426845 | orchestrator | 2026-02-05 02:29:14.426872 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-05 02:29:14.426894 | orchestrator | Thursday 05 February 2026 02:29:10 +0000 (0:00:00.538) 0:01:06.982 ***** 2026-02-05 02:29:14.426912 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:14.426931 | orchestrator | 2026-02-05 02:29:14.426948 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-05 02:29:14.426967 | orchestrator | Thursday 05 February 2026 02:29:11 +0000 (0:00:00.505) 0:01:07.487 ***** 2026-02-05 02:29:14.426986 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:14.427013 | orchestrator | 2026-02-05 02:29:14.427035 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-05 02:29:14.427065 | orchestrator | Thursday 05 February 2026 02:29:11 +0000 (0:00:00.149) 0:01:07.636 ***** 2026-02-05 02:29:14.427085 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427119 | orchestrator | 2026-02-05 02:29:14.427137 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-05 02:29:14.427155 | orchestrator | Thursday 05 February 2026 02:29:11 +0000 (0:00:00.106) 0:01:07.742 ***** 2026-02-05 02:29:14.427201 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427220 | orchestrator | 2026-02-05 02:29:14.427238 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-05 02:29:14.427256 | orchestrator | Thursday 05 February 2026 02:29:11 +0000 (0:00:00.305) 0:01:08.048 ***** 2026-02-05 02:29:14.427274 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 02:29:14.427292 | orchestrator |  "vgs_report": { 2026-02-05 02:29:14.427312 | orchestrator |  "vg": [] 2026-02-05 02:29:14.427355 | orchestrator |  } 2026-02-05 02:29:14.427376 | orchestrator | } 2026-02-05 02:29:14.427393 | orchestrator | 2026-02-05 02:29:14.427404 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-05 02:29:14.427415 | orchestrator | Thursday 05 February 2026 02:29:11 +0000 (0:00:00.147) 0:01:08.195 ***** 2026-02-05 02:29:14.427426 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427437 | orchestrator | 2026-02-05 02:29:14.427448 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-05 02:29:14.427459 | orchestrator | Thursday 05 February 2026 02:29:11 +0000 (0:00:00.141) 0:01:08.337 ***** 2026-02-05 02:29:14.427479 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427490 | orchestrator | 2026-02-05 02:29:14.427501 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-05 02:29:14.427512 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.145) 0:01:08.483 ***** 2026-02-05 02:29:14.427522 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427533 | orchestrator | 2026-02-05 02:29:14.427544 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-05 02:29:14.427555 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.141) 0:01:08.624 ***** 2026-02-05 02:29:14.427566 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427576 | orchestrator | 2026-02-05 02:29:14.427587 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-05 02:29:14.427598 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.134) 0:01:08.758 ***** 2026-02-05 02:29:14.427608 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427619 | orchestrator | 2026-02-05 02:29:14.427630 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-05 02:29:14.427641 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.143) 0:01:08.901 ***** 2026-02-05 02:29:14.427651 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427662 | orchestrator | 2026-02-05 02:29:14.427673 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-05 02:29:14.427684 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.137) 0:01:09.039 ***** 2026-02-05 02:29:14.427694 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427705 | orchestrator | 2026-02-05 02:29:14.427716 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-05 02:29:14.427727 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.129) 0:01:09.168 ***** 2026-02-05 02:29:14.427738 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427748 | orchestrator | 2026-02-05 02:29:14.427759 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-05 02:29:14.427770 | orchestrator | Thursday 05 February 2026 02:29:12 +0000 (0:00:00.143) 0:01:09.311 ***** 2026-02-05 02:29:14.427780 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427791 | orchestrator | 2026-02-05 02:29:14.427802 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-05 02:29:14.427813 | orchestrator | Thursday 05 February 2026 02:29:13 +0000 (0:00:00.141) 0:01:09.453 ***** 2026-02-05 02:29:14.427834 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427845 | orchestrator | 2026-02-05 02:29:14.427856 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-05 02:29:14.427866 | orchestrator | Thursday 05 February 2026 02:29:13 +0000 (0:00:00.141) 0:01:09.594 ***** 2026-02-05 02:29:14.427877 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427888 | orchestrator | 2026-02-05 02:29:14.427898 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-05 02:29:14.427909 | orchestrator | Thursday 05 February 2026 02:29:13 +0000 (0:00:00.345) 0:01:09.939 ***** 2026-02-05 02:29:14.427920 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427931 | orchestrator | 2026-02-05 02:29:14.427941 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-05 02:29:14.427952 | orchestrator | Thursday 05 February 2026 02:29:13 +0000 (0:00:00.145) 0:01:10.085 ***** 2026-02-05 02:29:14.427963 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.427974 | orchestrator | 2026-02-05 02:29:14.427984 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-05 02:29:14.427995 | orchestrator | Thursday 05 February 2026 02:29:13 +0000 (0:00:00.149) 0:01:10.234 ***** 2026-02-05 02:29:14.428006 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.428016 | orchestrator | 2026-02-05 02:29:14.428027 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-05 02:29:14.428038 | orchestrator | Thursday 05 February 2026 02:29:13 +0000 (0:00:00.143) 0:01:10.378 ***** 2026-02-05 02:29:14.428049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:14.428061 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:14.428071 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.428082 | orchestrator | 2026-02-05 02:29:14.428093 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-05 02:29:14.428104 | orchestrator | Thursday 05 February 2026 02:29:14 +0000 (0:00:00.162) 0:01:10.540 ***** 2026-02-05 02:29:14.428115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:14.428126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:14.428137 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:14.428148 | orchestrator | 2026-02-05 02:29:14.428159 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-05 02:29:14.428223 | orchestrator | Thursday 05 February 2026 02:29:14 +0000 (0:00:00.149) 0:01:10.690 ***** 2026-02-05 02:29:14.428243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.497644 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.497751 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.497767 | orchestrator | 2026-02-05 02:29:17.497799 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-05 02:29:17.497812 | orchestrator | Thursday 05 February 2026 02:29:14 +0000 (0:00:00.160) 0:01:10.851 ***** 2026-02-05 02:29:17.497823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.497835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.497868 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.497880 | orchestrator | 2026-02-05 02:29:17.497890 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 02:29:17.497902 | orchestrator | Thursday 05 February 2026 02:29:14 +0000 (0:00:00.154) 0:01:11.005 ***** 2026-02-05 02:29:17.497913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.497924 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.497935 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.497945 | orchestrator | 2026-02-05 02:29:17.497956 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 02:29:17.497967 | orchestrator | Thursday 05 February 2026 02:29:14 +0000 (0:00:00.156) 0:01:11.161 ***** 2026-02-05 02:29:17.497978 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.497989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.497999 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.498010 | orchestrator | 2026-02-05 02:29:17.498088 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-05 02:29:17.498100 | orchestrator | Thursday 05 February 2026 02:29:14 +0000 (0:00:00.152) 0:01:11.314 ***** 2026-02-05 02:29:17.498111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.498122 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.498133 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.498143 | orchestrator | 2026-02-05 02:29:17.498154 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 02:29:17.498195 | orchestrator | Thursday 05 February 2026 02:29:15 +0000 (0:00:00.158) 0:01:11.473 ***** 2026-02-05 02:29:17.498213 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.498227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.498240 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.498253 | orchestrator | 2026-02-05 02:29:17.498266 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 02:29:17.498279 | orchestrator | Thursday 05 February 2026 02:29:15 +0000 (0:00:00.157) 0:01:11.630 ***** 2026-02-05 02:29:17.498291 | 
orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:17.498305 | orchestrator | 2026-02-05 02:29:17.498317 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 02:29:17.498329 | orchestrator | Thursday 05 February 2026 02:29:15 +0000 (0:00:00.705) 0:01:12.336 ***** 2026-02-05 02:29:17.498342 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:17.498354 | orchestrator | 2026-02-05 02:29:17.498367 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 02:29:17.498380 | orchestrator | Thursday 05 February 2026 02:29:16 +0000 (0:00:00.524) 0:01:12.861 ***** 2026-02-05 02:29:17.498393 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:17.498405 | orchestrator | 2026-02-05 02:29:17.498418 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 02:29:17.498430 | orchestrator | Thursday 05 February 2026 02:29:16 +0000 (0:00:00.202) 0:01:13.063 ***** 2026-02-05 02:29:17.498453 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'vg_name': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}) 2026-02-05 02:29:17.498467 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'vg_name': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}) 2026-02-05 02:29:17.498480 | orchestrator | 2026-02-05 02:29:17.498494 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 02:29:17.498507 | orchestrator | Thursday 05 February 2026 02:29:16 +0000 (0:00:00.185) 0:01:13.249 ***** 2026-02-05 02:29:17.498538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.498557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.498568 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.498579 | orchestrator | 2026-02-05 02:29:17.498590 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-05 02:29:17.498601 | orchestrator | Thursday 05 February 2026 02:29:16 +0000 (0:00:00.170) 0:01:13.420 ***** 2026-02-05 02:29:17.498611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.498622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.498633 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.498644 | orchestrator | 2026-02-05 02:29:17.498655 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 02:29:17.498666 | orchestrator | Thursday 05 February 2026 02:29:17 +0000 (0:00:00.163) 0:01:13.583 ***** 2026-02-05 02:29:17.498677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 02:29:17.498688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 02:29:17.498698 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:17.498709 | orchestrator | 2026-02-05 02:29:17.498720 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 02:29:17.498731 | orchestrator | Thursday 05 February 2026 02:29:17 +0000 (0:00:00.165) 0:01:13.749 ***** 2026-02-05 02:29:17.498742 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-05 02:29:17.498753 | orchestrator |  "lvm_report": { 2026-02-05 02:29:17.498764 | orchestrator |  "lv": [ 2026-02-05 02:29:17.498775 | orchestrator |  { 2026-02-05 02:29:17.498786 | orchestrator |  "lv_name": "osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be", 2026-02-05 02:29:17.498797 | orchestrator |  "vg_name": "ceph-27670a2c-7838-5627-a951-e8a6d97fe4be" 2026-02-05 02:29:17.498808 | orchestrator |  }, 2026-02-05 02:29:17.498819 | orchestrator |  { 2026-02-05 02:29:17.498830 | orchestrator |  "lv_name": "osd-block-51c61bf5-abad-542f-be8e-c69d5e860565", 2026-02-05 02:29:17.498841 | orchestrator |  "vg_name": "ceph-51c61bf5-abad-542f-be8e-c69d5e860565" 2026-02-05 02:29:17.498852 | orchestrator |  } 2026-02-05 02:29:17.498862 | orchestrator |  ], 2026-02-05 02:29:17.498873 | orchestrator |  "pv": [ 2026-02-05 02:29:17.498884 | orchestrator |  { 2026-02-05 02:29:17.498895 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 02:29:17.498905 | orchestrator |  "vg_name": "ceph-27670a2c-7838-5627-a951-e8a6d97fe4be" 2026-02-05 02:29:17.498916 | orchestrator |  }, 2026-02-05 02:29:17.498927 | orchestrator |  { 2026-02-05 02:29:17.498938 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 02:29:17.498960 | orchestrator |  "vg_name": "ceph-51c61bf5-abad-542f-be8e-c69d5e860565" 2026-02-05 02:29:17.498972 | orchestrator |  } 2026-02-05 02:29:17.498982 | orchestrator |  ] 2026-02-05 02:29:17.498993 | orchestrator |  } 2026-02-05 02:29:17.499004 | orchestrator | } 2026-02-05 02:29:17.499015 | orchestrator | 2026-02-05 02:29:17.499026 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:29:17.499037 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 02:29:17.499048 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 02:29:17.499059 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 02:29:17.499070 | orchestrator | 2026-02-05 02:29:17.499081 | orchestrator | 2026-02-05 02:29:17.499092 | orchestrator | 2026-02-05 02:29:17.499103 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:29:17.499113 | orchestrator | Thursday 05 February 2026 02:29:17 +0000 (0:00:00.150) 0:01:13.900 ***** 2026-02-05 02:29:17.499124 | orchestrator | =============================================================================== 2026-02-05 02:29:17.499135 | orchestrator | Create block VGs -------------------------------------------------------- 5.79s 2026-02-05 02:29:17.499145 | orchestrator | Create block LVs -------------------------------------------------------- 4.29s 2026-02-05 02:29:17.499156 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.77s 2026-02-05 02:29:17.499201 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-02-05 02:29:17.499213 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s 2026-02-05 02:29:17.499223 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.62s 2026-02-05 02:29:17.499234 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2026-02-05 02:29:17.499245 | orchestrator | Add known links to the list of available block devices ------------------ 1.34s 2026-02-05 02:29:17.499263 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s 2026-02-05 02:29:17.877343 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.22s 2026-02-05 02:29:17.877449 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-02-05 02:29:17.877463 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.88s 2026-02-05 02:29:17.877495 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.86s 2026-02-05 02:29:17.877507 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.85s 2026-02-05 02:29:17.877518 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-02-05 02:29:17.877529 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.82s 2026-02-05 02:29:17.877540 | orchestrator | Count OSDs put on ceph_wal_devices defined in lvm_volumes --------------- 0.71s 2026-02-05 02:29:17.877550 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-02-05 02:29:17.877561 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.69s 2026-02-05 02:29:17.877572 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-02-05 02:29:30.258376 | orchestrator | 2026-02-05 02:29:30 | INFO  | Task 32dee168-902f-4a4a-8a57-1bd193408137 (facts) was prepared for execution. 2026-02-05 02:29:30.258513 | orchestrator | 2026-02-05 02:29:30 | INFO  | It takes a moment until task 32dee168-902f-4a4a-8a57-1bd193408137 (facts) has been started and output is visible here. 
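The play above repeatedly gathers VG/LV/PV state "with total and available size in bytes" and then combines the JSON command output (`_db/wal/db_wal_vgs_cmd_output`, `_lvs_cmd_output/_pvs_cmd_output`). The exact commands are not visible in the log, but LVM's reporting tools emit JSON of the shape parsed below; as a hedged sketch (the sample data and the assumption that `vgs --reportformat json --units b` style output is used are mine, not from the log):

```python
import json

# Invented sample in the shape produced by `vgs --reportformat json
# --units b` (assumption: the playbook gathers VG data this way; the
# VG names and sizes here are illustrative only).
vgs_output = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "53687091200B"},
        {"vg_name": "ceph-db-1", "vg_size": "107374182400B", "vg_free": "10737418240B"}
      ]
    }
  ]
}
"""

def vg_sizes(report_json: str) -> dict:
    """Map vg_name -> (total_bytes, free_bytes), mirroring what the
    'Gather ... VGs with total and available size in bytes' tasks
    would need before the 'Fail if ... > available' checks can run."""
    report = json.loads(report_json)
    result = {}
    for block in report["report"]:
        for vg in block.get("vg", []):
            total = int(vg["vg_size"].rstrip("B"))
            free = int(vg["vg_free"].rstrip("B"))
            result[vg["vg_name"]] = (total, free)
    return result

sizes = vg_sizes(vgs_output)
print(sizes["ceph-db-0"])
```

In this run all the DB/WAL size-check tasks skip because `vgs_report.vg` is empty, i.e. no dedicated DB/WAL VGs exist on testbed-node-5.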
2026-02-05 02:29:43.139132 | orchestrator | 2026-02-05 02:29:43.139248 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-05 02:29:43.139283 | orchestrator | 2026-02-05 02:29:43.139291 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 02:29:43.139299 | orchestrator | Thursday 05 February 2026 02:29:34 +0000 (0:00:00.268) 0:00:00.268 ***** 2026-02-05 02:29:43.139306 | orchestrator | ok: [testbed-manager] 2026-02-05 02:29:43.139314 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:29:43.139320 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:29:43.139327 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:29:43.139334 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:29:43.139341 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:29:43.139347 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:43.139354 | orchestrator | 2026-02-05 02:29:43.139361 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 02:29:43.139368 | orchestrator | Thursday 05 February 2026 02:29:35 +0000 (0:00:01.224) 0:00:01.493 ***** 2026-02-05 02:29:43.139375 | orchestrator | skipping: [testbed-manager] 2026-02-05 02:29:43.139382 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:29:43.139389 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:29:43.139396 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:29:43.139402 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:29:43.139409 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:29:43.139416 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:43.139423 | orchestrator | 2026-02-05 02:29:43.139429 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 02:29:43.139436 | orchestrator | 2026-02-05 02:29:43.139443 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-05 02:29:43.139450 | orchestrator | Thursday 05 February 2026 02:29:37 +0000 (0:00:01.295) 0:00:02.789 ***** 2026-02-05 02:29:43.139457 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:29:43.139464 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:29:43.139471 | orchestrator | ok: [testbed-manager] 2026-02-05 02:29:43.139477 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:29:43.139484 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:29:43.139491 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:29:43.139498 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:29:43.139504 | orchestrator | 2026-02-05 02:29:43.139511 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 02:29:43.139518 | orchestrator | 2026-02-05 02:29:43.139525 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 02:29:43.139532 | orchestrator | Thursday 05 February 2026 02:29:42 +0000 (0:00:05.150) 0:00:07.939 ***** 2026-02-05 02:29:43.139538 | orchestrator | skipping: [testbed-manager] 2026-02-05 02:29:43.139545 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:29:43.139552 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:29:43.139558 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:29:43.139565 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:29:43.139572 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:29:43.139579 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:29:43.139585 | orchestrator | 2026-02-05 02:29:43.139592 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:29:43.139599 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:29:43.139607 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-05 02:29:43.139614 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:29:43.139621 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:29:43.139628 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:29:43.139641 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:29:43.139648 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 02:29:43.139655 | orchestrator | 2026-02-05 02:29:43.139661 | orchestrator | 2026-02-05 02:29:43.139668 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:29:43.139688 | orchestrator | Thursday 05 February 2026 02:29:42 +0000 (0:00:00.560) 0:00:08.499 ***** 2026-02-05 02:29:43.139695 | orchestrator | =============================================================================== 2026-02-05 02:29:43.139703 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.15s 2026-02-05 02:29:43.139711 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s 2026-02-05 02:29:43.139719 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.23s 2026-02-05 02:29:43.139727 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-02-05 02:29:45.489302 | orchestrator | 2026-02-05 02:29:45 | INFO  | Task b18446b9-de49-464f-963c-5fefec8fb02a (ceph) was prepared for execution. 2026-02-05 02:29:45.489409 | orchestrator | 2026-02-05 02:29:45 | INFO  | It takes a moment until task b18446b9-de49-464f-963c-5fefec8fb02a (ceph) has been started and output is visible here. 
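The PLAY RECAP blocks above carry the per-host result counters that decide whether a run like this one passed. When post-processing such logs, a recap line can be split into its host and counter fields; a minimal sketch (the parsing approach is mine, not part of the job):

```python
import re

# A PLAY RECAP line as it appears in the log above.
line = ("testbed-node-5 : ok=2  changed=0 unreachable=0 "
        "failed=0 skipped=2  rescued=0 ignored=0")

def parse_recap(line: str):
    """Split an Ansible PLAY RECAP line into (host, counters dict)."""
    host, _, rest = line.partition(" : ")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, counters = parse_recap(line)
print(host, counters["failed"])  # testbed-node-5 0
```

Checking `failed == 0` and `unreachable == 0` across all hosts is the usual success criterion before the next task (here, the `ceph` task) is dispatched.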
2026-02-05 02:30:03.152231 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 02:30:03.152347 | orchestrator | 2.16.14 2026-02-05 02:30:03.152364 | orchestrator | 2026-02-05 02:30:03.152378 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-05 02:30:03.152390 | orchestrator | 2026-02-05 02:30:03.152402 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 02:30:03.152413 | orchestrator | Thursday 05 February 2026 02:29:50 +0000 (0:00:00.832) 0:00:00.832 ***** 2026-02-05 02:30:03.152425 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:30:03.152437 | orchestrator | 2026-02-05 02:30:03.152448 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 02:30:03.152459 | orchestrator | Thursday 05 February 2026 02:29:51 +0000 (0:00:01.211) 0:00:02.043 ***** 2026-02-05 02:30:03.152470 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:03.152482 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:03.152493 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:03.152504 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:03.152515 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:03.152526 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:03.152538 | orchestrator | 2026-02-05 02:30:03.152549 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 02:30:03.152561 | orchestrator | Thursday 05 February 2026 02:29:53 +0000 (0:00:01.319) 0:00:03.363 ***** 2026-02-05 02:30:03.152572 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:03.152583 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:03.152594 | orchestrator | ok: [testbed-node-5] 2026-02-05 
02:30:03.152605 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:03.152616 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:03.152627 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:03.152638 | orchestrator | 2026-02-05 02:30:03.152649 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 02:30:03.152660 | orchestrator | Thursday 05 February 2026 02:29:53 +0000 (0:00:00.764) 0:00:04.128 ***** 2026-02-05 02:30:03.152671 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:03.152682 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:03.152693 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:03.152704 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:03.152741 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:03.152756 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:03.152775 | orchestrator | 2026-02-05 02:30:03.152794 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 02:30:03.152811 | orchestrator | Thursday 05 February 2026 02:29:54 +0000 (0:00:00.968) 0:00:05.096 ***** 2026-02-05 02:30:03.152843 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:03.152861 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:03.152879 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:03.152896 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:03.152914 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:03.152932 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:03.152947 | orchestrator | 2026-02-05 02:30:03.152964 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 02:30:03.152981 | orchestrator | Thursday 05 February 2026 02:29:55 +0000 (0:00:00.676) 0:00:05.772 ***** 2026-02-05 02:30:03.152997 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:03.153013 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:03.153031 | orchestrator | ok: 
[testbed-node-5]
2026-02-05 02:30:03.153081 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:03.153099 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:03.153117 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:03.153135 | orchestrator |
2026-02-05 02:30:03.153154 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 02:30:03.153197 | orchestrator | Thursday 05 February 2026 02:29:56 +0000 (0:00:00.503) 0:00:06.276 *****
2026-02-05 02:30:03.153216 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:30:03.153235 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:30:03.153253 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:30:03.153271 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:03.153290 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:03.153307 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:03.153326 | orchestrator |
2026-02-05 02:30:03.153337 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 02:30:03.153348 | orchestrator | Thursday 05 February 2026 02:29:56 +0000 (0:00:00.745) 0:00:07.022 *****
2026-02-05 02:30:03.153360 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:03.153371 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:03.153382 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:03.153393 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:03.153404 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:03.153415 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:03.153426 | orchestrator |
2026-02-05 02:30:03.153437 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 02:30:03.153448 | orchestrator | Thursday 05 February 2026 02:29:57 +0000 (0:00:00.575) 0:00:07.597 *****
2026-02-05 02:30:03.153459 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:30:03.153470 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:30:03.153481 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:30:03.153492 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:03.153502 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:03.153531 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:03.153543 | orchestrator |
2026-02-05 02:30:03.153554 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 02:30:03.153565 | orchestrator | Thursday 05 February 2026 02:29:58 +0000 (0:00:00.683) 0:00:08.281 *****
2026-02-05 02:30:03.153576 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 02:30:03.153587 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 02:30:03.153598 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 02:30:03.153609 | orchestrator |
2026-02-05 02:30:03.153619 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 02:30:03.153630 | orchestrator | Thursday 05 February 2026 02:29:58 +0000 (0:00:00.602) 0:00:08.884 *****
2026-02-05 02:30:03.153655 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:30:03.153667 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:30:03.153680 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:30:03.153714 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:03.153728 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:03.153740 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:03.153753 | orchestrator |
2026-02-05 02:30:03.153767 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 02:30:03.153780 | orchestrator | Thursday 05 February 2026 02:29:59 +0000 (0:00:02.290) 0:00:09.653 *****
2026-02-05 02:30:03.153794 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 02:30:03.153807 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 02:30:03.153818 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 02:30:03.153829 | orchestrator |
2026-02-05 02:30:03.153841 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 02:30:03.153852 | orchestrator | Thursday 05 February 2026 02:30:01 +0000 (0:00:00.402) 0:00:11.943 *****
2026-02-05 02:30:03.153863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 02:30:03.153874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 02:30:03.153886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 02:30:03.153904 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:03.153921 | orchestrator |
2026-02-05 02:30:03.153939 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 02:30:03.153957 | orchestrator | Thursday 05 February 2026 02:30:02 +0000 (0:00:00.402) 0:00:12.346 *****
2026-02-05 02:30:03.153976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 02:30:03.153994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 02:30:03.154006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 02:30:03.154087 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:03.154100 | orchestrator |
2026-02-05 02:30:03.154111 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 02:30:03.154122 | orchestrator | Thursday 05 February 2026 02:30:02 +0000 (0:00:00.582) 0:00:12.929 *****
2026-02-05 02:30:03.154135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:03.154150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:03.154161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:03.154210 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:03.154223 | orchestrator |
2026-02-05 02:30:03.154241 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 02:30:03.154253 | orchestrator | Thursday 05 February 2026 02:30:02 +0000 (0:00:00.218) 0:00:13.147 *****
2026-02-05 02:30:03.154278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 02:30:00.315962', 'end': '2026-02-05 02:30:00.365720', 'delta': '0:00:00.049758', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 02:30:12.393655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 02:30:00.884273', 'end': '2026-02-05 02:30:00.929287', 'delta': '0:00:00.045014', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 02:30:12.393742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 02:30:01.414256', 'end': '2026-02-05 02:30:01.454043', 'delta': '0:00:00.039787', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 02:30:12.393752 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.393759 | orchestrator |
2026-02-05 02:30:12.393766 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 02:30:12.393773 | orchestrator | Thursday 05 February 2026 02:30:03 +0000 (0:00:00.174) 0:00:13.322 *****
2026-02-05 02:30:12.393779 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:30:12.393786 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:30:12.393792 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:30:12.393797 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:12.393803 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:12.393809 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:12.393814 | orchestrator |
2026-02-05 02:30:12.393820 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 02:30:12.393826 | orchestrator | Thursday 05 February 2026 02:30:03 +0000 (0:00:00.693) 0:00:14.016 *****
2026-02-05 02:30:12.393832 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:30:12.393838 | orchestrator |
2026-02-05 02:30:12.393844 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 02:30:12.393850 | orchestrator | Thursday 05 February 2026 02:30:04 +0000 (0:00:00.655) 0:00:14.672 *****
2026-02-05 02:30:12.393873 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.393879 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.393885 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.393891 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.393896 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.393902 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.393908 | orchestrator |
2026-02-05 02:30:12.393914 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 02:30:12.393920 | orchestrator | Thursday 05 February 2026 02:30:05 +0000 (0:00:00.792) 0:00:15.465 *****
2026-02-05 02:30:12.393925 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.393931 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.393937 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.393943 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.393948 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.393954 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.393959 | orchestrator |
2026-02-05 02:30:12.393965 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 02:30:12.393971 | orchestrator | Thursday 05 February 2026 02:30:06 +0000 (0:00:01.192) 0:00:16.657 *****
2026-02-05 02:30:12.393977 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.393983 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.393988 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.393994 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394000 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394056 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394064 | orchestrator |
2026-02-05 02:30:12.394070 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 02:30:12.394076 | orchestrator | Thursday 05 February 2026 02:30:07 +0000 (0:00:00.597) 0:00:17.255 *****
2026-02-05 02:30:12.394081 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394087 | orchestrator |
2026-02-05 02:30:12.394093 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 02:30:12.394099 | orchestrator | Thursday 05 February 2026 02:30:07 +0000 (0:00:00.133) 0:00:17.389 *****
2026-02-05 02:30:12.394104 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394110 | orchestrator |
2026-02-05 02:30:12.394116 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 02:30:12.394121 | orchestrator | Thursday 05 February 2026 02:30:07 +0000 (0:00:00.222) 0:00:17.611 *****
2026-02-05 02:30:12.394127 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394133 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394140 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394150 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394159 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394166 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394172 | orchestrator |
2026-02-05 02:30:12.394253 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 02:30:12.394261 | orchestrator | Thursday 05 February 2026 02:30:08 +0000 (0:00:00.753) 0:00:18.365 *****
2026-02-05 02:30:12.394268 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394275 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394282 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394288 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394295 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394302 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394308 | orchestrator |
2026-02-05 02:30:12.394315 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 02:30:12.394322 | orchestrator | Thursday 05 February 2026 02:30:08 +0000 (0:00:00.778) 0:00:18.952 *****
2026-02-05 02:30:12.394328 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394335 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394342 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394355 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394362 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394368 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394375 | orchestrator |
2026-02-05 02:30:12.394382 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 02:30:12.394388 | orchestrator | Thursday 05 February 2026 02:30:09 +0000 (0:00:00.601) 0:00:19.731 *****
2026-02-05 02:30:12.394395 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394402 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394408 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394415 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394421 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394428 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394435 | orchestrator |
2026-02-05 02:30:12.394441 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 02:30:12.394448 | orchestrator | Thursday 05 February 2026 02:30:10 +0000 (0:00:00.601) 0:00:20.332 *****
2026-02-05 02:30:12.394455 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394462 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394468 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394475 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394484 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394494 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394504 | orchestrator |
2026-02-05 02:30:12.394514 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 02:30:12.394525 | orchestrator | Thursday 05 February 2026 02:30:10 +0000 (0:00:00.767) 0:00:21.099 *****
2026-02-05 02:30:12.394536 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394548 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394557 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394564 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394571 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394578 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394584 | orchestrator |
2026-02-05 02:30:12.394592 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 02:30:12.394599 | orchestrator | Thursday 05 February 2026 02:30:11 +0000 (0:00:00.571) 0:00:21.671 *****
2026-02-05 02:30:12.394605 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.394611 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:12.394617 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:12.394623 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:12.394628 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:12.394634 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:12.394640 | orchestrator |
2026-02-05 02:30:12.394645 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 02:30:12.394651 | orchestrator | Thursday 05 February 2026 02:30:12 +0000 (0:00:00.771) 0:00:22.442 *****
2026-02-05 02:30:12.394658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.394671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.394688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.510652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.510667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.510679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.510704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.510724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.658371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658568 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:12.658580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 02:30:12.658708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.658722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.658745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:30:12.658764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.808680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.808813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.808838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 
'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.808862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.808918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.808961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.808983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.809004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.809051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.809072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.809092 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:12.809115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.809148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.809227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.809268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.987354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.987456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.987498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.987683 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:12.987696 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:12.987709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:12.987738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-05 02:30:13.216904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:13.217091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:13.217100 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:13.217109 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:13.217115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217161 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.217214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:30:13.423753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:13.423943 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:30:13.423962 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:13.423976 | orchestrator | 2026-02-05 02:30:13.423988 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 02:30:13.424000 | orchestrator | Thursday 05 February 2026 02:30:13 +0000 (0:00:00.944) 0:00:23.387 ***** 2026-02-05 02:30:13.424014 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424096 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424126 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424138 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.424219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.475703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.475848 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.475877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.475900 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.476075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 
1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.476118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.476142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-05 02:30:13.476164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.476267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.476303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928030 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928240 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928305 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.928478 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:13.928513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.988055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.988236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.988268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.988289 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:30:13.988310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:13.988370 | orchestrator | skipping: [testbed-node-5] => (item=loop2; false_condition 'osd_auto_discovery | default(False) | bool', identical zero-size loop device facts elided)
2026-02-05 02:30:13.988394 | orchestrator | skipping: [testbed-node-5] => (item=loop3, facts elided)
2026-02-05 02:30:13.988426 | orchestrator | skipping: [testbed-node-5] => (item=loop4, facts elided)
2026-02-05 02:30:13.988445 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:13.988469 | orchestrator | skipping: [testbed-node-5] => (item=loop5, facts elided)
2026-02-05 02:30:13.988489 | orchestrator | skipping: [testbed-node-5] => (item=loop6, facts elided)
2026-02-05 02:30:13.988509 | orchestrator | skipping: [testbed-node-5] => (item=loop7, facts elided)
2026-02-05 02:30:13.988556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:14.124946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:14.125029 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:14.125037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:14.125043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:14.125074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:30:14.125080 | orchestrator | skipping: [testbed-node-0] => (item=loop1, facts elided)
2026-02-05 02:30:14.125085 | orchestrator | skipping: [testbed-node-0] => (item=loop2, facts elided)
2026-02-05 02:30:14.125115 | orchestrator | skipping: [testbed-node-0] => (item=loop3, facts elided)
2026-02-05 02:30:14.125122 | orchestrator | skipping: [testbed-node-0] => (item=loop4, facts elided)
2026-02-05 02:30:14.125126 | orchestrator | skipping: [testbed-node-0] => (item=loop5, facts elided)
2026-02-05 02:30:14.125134 | orchestrator | skipping: [testbed-node-0] => (item=loop6, facts elided)
2026-02-05 02:30:14.125143 | orchestrator | skipping: [testbed-node-0] => (item=loop7, facts elided)
2026-02-05 02:30:14.269309 | orchestrator | skipping: [testbed-node-0] => (item=sda, QEMU HARDDISK 80.00 GB, serial scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581; same partition layout as testbed-node-5 sda, details elided)
2026-02-05 02:30:14.269429 | orchestrator | skipping: [testbed-node-0] => (item=sr0, QEMU DVD-ROM 'config-2', uuid 2026-02-05-01-22-40-00; same facts as testbed-node-5 sr0, details elided)
2026-02-05 02:30:14.269488 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:14.269505 | orchestrator | skipping: [testbed-node-1] => (item=loop0; false_condition 'inventory_hostname in groups.get(osd_group_name, [])', identical zero-size loop device facts elided)
2026-02-05 02:30:14.269550 | orchestrator | skipping: [testbed-node-1] => (item=loop1, facts elided)
2026-02-05 02:30:14.269563 | orchestrator | skipping: [testbed-node-1] => (item=loop2, facts elided)
2026-02-05 02:30:14.269575 | orchestrator | skipping: [testbed-node-1] => (item=loop3, facts elided)
2026-02-05 02:30:14.269587 | orchestrator | skipping: [testbed-node-1] => (item=loop4, facts elided)
2026-02-05 02:30:14.269605 | orchestrator | skipping: [testbed-node-1] => (item=loop5, facts elided)
2026-02-05 02:30:14.269625 | orchestrator | skipping: [testbed-node-1] => (item=loop6, facts elided)
2026-02-05 02:30:14.269636 | orchestrator | skipping: [testbed-node-1] => (item=loop7, facts elided)
2026-02-05 02:30:14.269659 | orchestrator | skipping: [testbed-node-1] => (item=sda, QEMU HARDDISK 80.00 GB, serial scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f; same partition layout as testbed-node-5 sda, details elided)
2026-02-05 02:30:14.505845 | orchestrator | skipping: [testbed-node-1] => (item=sr0, QEMU DVD-ROM 'config-2', uuid 2026-02-05-01-22-36-00; same facts as testbed-node-5 sr0, details elided)
2026-02-05 02:30:14.505950 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:14.505967 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:14.505981 | orchestrator | skipping: [testbed-node-2] => (item=loop0; false_condition 'inventory_hostname in groups.get(osd_group_name, [])', identical zero-size loop device facts elided)
2026-02-05 02:30:14.505994 | orchestrator | skipping: [testbed-node-2] => (item=loop1, facts elided)
2026-02-05 02:30:14.506006 | orchestrator | skipping: [testbed-node-2] => (item=loop2, facts elided)
2026-02-05 02:30:14.506072 | orchestrator | skipping: [testbed-node-2] => (item=loop3, facts elided)
2026-02-05 02:30:14.506087 | orchestrator | skipping: [testbed-node-2] => (item=loop4, facts elided)
2026-02-05 02:30:14.506151 | orchestrator | skipping: [testbed-node-2] => (item=loop5, facts elided)
2026-02-05 02:30:14.506165 | orchestrator | skipping: [testbed-node-2] => (item=loop6, facts elided)
2026-02-05 02:30:14.506207 | orchestrator | skipping: [testbed-node-2] => (item=loop7, facts elided)
2026-02-05 02:30:14.506223 | orchestrator | skipping: [testbed-node-2] => (item=sda, QEMU HARDDISK 80.00 GB, serial scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f; same partition layout as testbed-node-5 sda, details elided)
2026-02-05 02:30:14.506258 | orchestrator | skipping: [testbed-node-2] => (item=sr0, QEMU DVD-ROM 'config-2', uuid 2026-02-05-01-22-33-00; same facts as testbed-node-5 sr0, details elided)
2026-02-05 02:30:25.572121 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:25.572243 | orchestrator |
2026-02-05 02:30:25.572258 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists]
******************************
2026-02-05 02:30:25.572269 | orchestrator | Thursday 05 February 2026 02:30:14 +0000 (0:00:01.290) 0:00:24.678 *****
2026-02-05 02:30:25.572278 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:30:25.572288 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:30:25.572297 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:30:25.572306 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:25.572315 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:25.572323 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:25.572332 | orchestrator |
2026-02-05 02:30:25.572341 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 02:30:25.572350 | orchestrator | Thursday 05 February 2026 02:30:15 +0000 (0:00:00.904) 0:00:25.583 *****
2026-02-05 02:30:25.572358 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:30:25.572367 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:30:25.572376 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:30:25.572385 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:30:25.572448 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:30:25.572458 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:30:25.572467 | orchestrator |
2026-02-05 02:30:25.572476 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 02:30:25.572485 | orchestrator | Thursday 05 February 2026 02:30:16 +0000 (0:00:00.782) 0:00:26.365 *****
2026-02-05 02:30:25.572494 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:25.572503 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:25.572512 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:25.572521 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:25.572529 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:25.572538 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:25.572547 | orchestrator |
2026-02-05 02:30:25.572556 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 02:30:25.572565 | orchestrator | Thursday 05 February 2026 02:30:16 +0000 (0:00:00.785) 0:00:26.975 *****
2026-02-05 02:30:25.572574 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:25.572583 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:25.572592 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:25.572601 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:25.572609 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:25.572618 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:25.572627 | orchestrator |
2026-02-05 02:30:25.572636 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 02:30:25.572645 | orchestrator | Thursday 05 February 2026 02:30:17 +0000 (0:00:00.622) 0:00:27.760 *****
2026-02-05 02:30:25.572654 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:25.572662 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:25.572671 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:25.572707 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:25.572718 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:25.572728 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:25.572737 | orchestrator |
2026-02-05 02:30:25.572748 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 02:30:25.572758 | orchestrator | Thursday 05 February 2026 02:30:18 +0000 (0:00:00.788) 0:00:28.383 *****
2026-02-05 02:30:25.572769 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:30:25.572778 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:30:25.572789 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:30:25.572799 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:30:25.572808 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:30:25.572818 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:30:25.572829 | orchestrator |
2026-02-05 02:30:25.572839 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 02:30:25.572849 | orchestrator | Thursday 05 February 2026 02:30:18 +0000 (0:00:00.788) 0:00:29.171 *****
2026-02-05 02:30:25.572859 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 02:30:25.572870 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 02:30:25.572880 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 02:30:25.572889 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 02:30:25.572900 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 02:30:25.572910 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 02:30:25.572920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 02:30:25.572930 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 02:30:25.572940 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 02:30:25.572950 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-05 02:30:25.572960 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 02:30:25.572970 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 02:30:25.572979 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 02:30:25.572989 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 02:30:25.572999 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 02:30:25.573009 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 02:30:25.573019 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 02:30:25.573100 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-05
02:30:25.573113 | orchestrator | 2026-02-05 02:30:25.573122 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 02:30:25.573131 | orchestrator | Thursday 05 February 2026 02:30:20 +0000 (0:00:01.534) 0:00:30.705 ***** 2026-02-05 02:30:25.573139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 02:30:25.573149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 02:30:25.573158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 02:30:25.573167 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:25.573212 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 02:30:25.573222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 02:30:25.573231 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 02:30:25.573256 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:25.573266 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 02:30:25.573275 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 02:30:25.573283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 02:30:25.573292 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:25.573300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 02:30:25.573309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 02:30:25.573327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 02:30:25.573335 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:25.573344 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 02:30:25.573353 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 02:30:25.573361 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-02-05 02:30:25.573370 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:25.573378 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 02:30:25.573387 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 02:30:25.573395 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 02:30:25.573404 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:25.573413 | orchestrator | 2026-02-05 02:30:25.573422 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 02:30:25.573430 | orchestrator | Thursday 05 February 2026 02:30:21 +0000 (0:00:00.924) 0:00:31.630 ***** 2026-02-05 02:30:25.573439 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:25.573448 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:25.573456 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:25.573466 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:30:25.573475 | orchestrator | 2026-02-05 02:30:25.573484 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 02:30:25.573494 | orchestrator | Thursday 05 February 2026 02:30:22 +0000 (0:00:00.968) 0:00:32.599 ***** 2026-02-05 02:30:25.573503 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:25.573512 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:25.573520 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:25.573529 | orchestrator | 2026-02-05 02:30:25.573538 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 02:30:25.573546 | orchestrator | Thursday 05 February 2026 02:30:22 +0000 (0:00:00.333) 0:00:32.932 ***** 2026-02-05 02:30:25.573555 | orchestrator 
| skipping: [testbed-node-3] 2026-02-05 02:30:25.573564 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:25.573572 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:25.573581 | orchestrator | 2026-02-05 02:30:25.573590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 02:30:25.573599 | orchestrator | Thursday 05 February 2026 02:30:23 +0000 (0:00:00.345) 0:00:33.277 ***** 2026-02-05 02:30:25.573607 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:25.573616 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:25.573625 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:25.573633 | orchestrator | 2026-02-05 02:30:25.573642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 02:30:25.573650 | orchestrator | Thursday 05 February 2026 02:30:23 +0000 (0:00:00.311) 0:00:33.589 ***** 2026-02-05 02:30:25.573659 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:25.573668 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:25.573677 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:25.573685 | orchestrator | 2026-02-05 02:30:25.573694 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 02:30:25.573703 | orchestrator | Thursday 05 February 2026 02:30:24 +0000 (0:00:00.646) 0:00:34.236 ***** 2026-02-05 02:30:25.573711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:30:25.573720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:30:25.573729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:30:25.573737 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:25.573746 | orchestrator | 2026-02-05 02:30:25.573755 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 02:30:25.573765 | 
orchestrator | Thursday 05 February 2026 02:30:24 +0000 (0:00:00.387) 0:00:34.624 ***** 2026-02-05 02:30:25.573788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:30:25.573802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:30:25.573823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:30:25.573888 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:25.573903 | orchestrator | 2026-02-05 02:30:25.573916 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 02:30:25.573929 | orchestrator | Thursday 05 February 2026 02:30:24 +0000 (0:00:00.389) 0:00:35.014 ***** 2026-02-05 02:30:25.573949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:30:25.573962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:30:25.573975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:30:25.573989 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:25.574001 | orchestrator | 2026-02-05 02:30:25.574112 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 02:30:25.574136 | orchestrator | Thursday 05 February 2026 02:30:25 +0000 (0:00:00.387) 0:00:35.402 ***** 2026-02-05 02:30:25.574151 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:25.574165 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:25.574212 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:25.574222 | orchestrator | 2026-02-05 02:30:25.574231 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 02:30:25.574252 | orchestrator | Thursday 05 February 2026 02:30:25 +0000 (0:00:00.340) 0:00:35.742 ***** 2026-02-05 02:30:44.735770 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 02:30:44.735900 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-05 02:30:44.736044 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 02:30:44.736065 | orchestrator | 2026-02-05 02:30:44.736078 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 02:30:44.736090 | orchestrator | Thursday 05 February 2026 02:30:26 +0000 (0:00:00.730) 0:00:36.473 ***** 2026-02-05 02:30:44.736101 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 02:30:44.736113 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 02:30:44.736124 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 02:30:44.736135 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 02:30:44.736146 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 02:30:44.736162 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 02:30:44.736218 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 02:30:44.736243 | orchestrator | 2026-02-05 02:30:44.736261 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 02:30:44.736279 | orchestrator | Thursday 05 February 2026 02:30:27 +0000 (0:00:01.187) 0:00:37.661 ***** 2026-02-05 02:30:44.736298 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 02:30:44.736316 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 02:30:44.736335 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 02:30:44.736356 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 02:30:44.736374 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 02:30:44.736393 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 02:30:44.736414 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 02:30:44.736433 | orchestrator | 2026-02-05 02:30:44.736450 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 02:30:44.736494 | orchestrator | Thursday 05 February 2026 02:30:29 +0000 (0:00:01.963) 0:00:39.624 ***** 2026-02-05 02:30:44.736510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:30:44.736524 | orchestrator | 2026-02-05 02:30:44.736535 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 02:30:44.736546 | orchestrator | Thursday 05 February 2026 02:30:30 +0000 (0:00:01.188) 0:00:40.813 ***** 2026-02-05 02:30:44.736557 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:30:44.736568 | orchestrator | 2026-02-05 02:30:44.736579 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 02:30:44.736590 | orchestrator | Thursday 05 February 2026 02:30:31 +0000 (0:00:01.244) 0:00:42.058 ***** 2026-02-05 02:30:44.736601 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.736612 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:44.736623 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:44.736634 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:44.736645 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:44.736655 | 
orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:44.736710 | orchestrator | 2026-02-05 02:30:44.736722 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 02:30:44.736734 | orchestrator | Thursday 05 February 2026 02:30:33 +0000 (0:00:01.242) 0:00:43.300 ***** 2026-02-05 02:30:44.736790 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.736803 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.736814 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.736825 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.736836 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.736846 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.736857 | orchestrator | 2026-02-05 02:30:44.736868 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 02:30:44.736880 | orchestrator | Thursday 05 February 2026 02:30:33 +0000 (0:00:00.689) 0:00:43.990 ***** 2026-02-05 02:30:44.736891 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.736901 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.736912 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.736923 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.736933 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.736944 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.736972 | orchestrator | 2026-02-05 02:30:44.737001 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 02:30:44.737013 | orchestrator | Thursday 05 February 2026 02:30:34 +0000 (0:00:00.817) 0:00:44.808 ***** 2026-02-05 02:30:44.737024 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.737035 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.737046 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.737057 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
02:30:44.737067 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.737078 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.737089 | orchestrator | 2026-02-05 02:30:44.737100 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 02:30:44.737111 | orchestrator | Thursday 05 February 2026 02:30:35 +0000 (0:00:00.712) 0:00:45.520 ***** 2026-02-05 02:30:44.737122 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.737133 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:44.737166 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:44.737177 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:44.737220 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:44.737231 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:44.737277 | orchestrator | 2026-02-05 02:30:44.737290 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 02:30:44.737313 | orchestrator | Thursday 05 February 2026 02:30:36 +0000 (0:00:01.244) 0:00:46.765 ***** 2026-02-05 02:30:44.737324 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.737335 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:44.737347 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:44.737366 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.737384 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.737404 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.737423 | orchestrator | 2026-02-05 02:30:44.737442 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 02:30:44.737454 | orchestrator | Thursday 05 February 2026 02:30:37 +0000 (0:00:00.584) 0:00:47.349 ***** 2026-02-05 02:30:44.737465 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.737476 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:44.737487 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 02:30:44.737504 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.737521 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.737548 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.737567 | orchestrator | 2026-02-05 02:30:44.737585 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 02:30:44.737603 | orchestrator | Thursday 05 February 2026 02:30:37 +0000 (0:00:00.792) 0:00:48.141 ***** 2026-02-05 02:30:44.737622 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.737642 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.737659 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.737676 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:44.737693 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:44.737711 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:44.737784 | orchestrator | 2026-02-05 02:30:44.737825 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 02:30:44.737867 | orchestrator | Thursday 05 February 2026 02:30:38 +0000 (0:00:01.022) 0:00:49.164 ***** 2026-02-05 02:30:44.737880 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.737891 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.737902 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.737912 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:44.737923 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:44.737934 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:44.737944 | orchestrator | 2026-02-05 02:30:44.737955 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 02:30:44.737966 | orchestrator | Thursday 05 February 2026 02:30:40 +0000 (0:00:01.256) 0:00:50.420 ***** 2026-02-05 02:30:44.737977 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.737988 | orchestrator | 
skipping: [testbed-node-4] 2026-02-05 02:30:44.737999 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:44.738010 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.738107 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.738127 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.738144 | orchestrator | 2026-02-05 02:30:44.738162 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 02:30:44.738273 | orchestrator | Thursday 05 February 2026 02:30:40 +0000 (0:00:00.583) 0:00:51.004 ***** 2026-02-05 02:30:44.738298 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.738317 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:44.738333 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:44.738350 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:30:44.738367 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:30:44.738383 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:30:44.738503 | orchestrator | 2026-02-05 02:30:44.738531 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 02:30:44.738550 | orchestrator | Thursday 05 February 2026 02:30:41 +0000 (0:00:00.825) 0:00:51.829 ***** 2026-02-05 02:30:44.738569 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.738586 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.738625 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.738643 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.738661 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.738678 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.738696 | orchestrator | 2026-02-05 02:30:44.738716 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 02:30:44.738734 | orchestrator | Thursday 05 February 2026 02:30:42 +0000 (0:00:00.597) 0:00:52.427 ***** 2026-02-05 
02:30:44.738750 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.738768 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.738786 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.738856 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.738874 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.738922 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.738932 | orchestrator | 2026-02-05 02:30:44.738942 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 02:30:44.738951 | orchestrator | Thursday 05 February 2026 02:30:43 +0000 (0:00:00.804) 0:00:53.231 ***** 2026-02-05 02:30:44.738961 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:30:44.739004 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:30:44.739016 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:30:44.739026 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.739035 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.739056 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.739066 | orchestrator | 2026-02-05 02:30:44.739076 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 02:30:44.739086 | orchestrator | Thursday 05 February 2026 02:30:43 +0000 (0:00:00.592) 0:00:53.824 ***** 2026-02-05 02:30:44.739122 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.739156 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:30:44.739167 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:30:44.739177 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:30:44.739221 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:30:44.739231 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:30:44.739241 | orchestrator | 2026-02-05 02:30:44.739251 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 02:30:44.739261 
| orchestrator | Thursday 05 February 2026 02:30:44 +0000 (0:00:00.800) 0:00:54.624 ***** 2026-02-05 02:30:44.739270 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:30:44.739298 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:31:53.239939 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:31:53.240073 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:31:53.240100 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:31:53.240118 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:31:53.240138 | orchestrator | 2026-02-05 02:31:53.240159 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 02:31:53.240181 | orchestrator | Thursday 05 February 2026 02:30:45 +0000 (0:00:00.614) 0:00:55.239 ***** 2026-02-05 02:31:53.240234 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:31:53.240248 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:31:53.240259 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:31:53.240270 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:31:53.240282 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:31:53.240293 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:31:53.240304 | orchestrator | 2026-02-05 02:31:53.240315 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 02:31:53.240327 | orchestrator | Thursday 05 February 2026 02:30:45 +0000 (0:00:00.813) 0:00:56.052 ***** 2026-02-05 02:31:53.240338 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:31:53.240349 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:31:53.240360 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:31:53.240370 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:31:53.240381 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:31:53.240392 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:31:53.240439 | orchestrator | 2026-02-05 02:31:53.240454 | orchestrator | TASK [ceph-handler : 
Set_fact handler_exporter_status] ************************* 2026-02-05 02:31:53.240466 | orchestrator | Thursday 05 February 2026 02:30:46 +0000 (0:00:00.608) 0:00:56.662 ***** 2026-02-05 02:31:53.240479 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:31:53.240492 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:31:53.240504 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:31:53.240516 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:31:53.240529 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:31:53.240542 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:31:53.240555 | orchestrator | 2026-02-05 02:31:53.240568 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 02:31:53.240580 | orchestrator | Thursday 05 February 2026 02:30:47 +0000 (0:00:01.121) 0:00:57.783 ***** 2026-02-05 02:31:53.240593 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:31:53.240606 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:31:53.240619 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:31:53.240631 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:31:53.240643 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:31:53.240655 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:31:53.240667 | orchestrator | 2026-02-05 02:31:53.240681 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 02:31:53.240693 | orchestrator | Thursday 05 February 2026 02:30:49 +0000 (0:00:01.497) 0:00:59.281 ***** 2026-02-05 02:31:53.240706 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:31:53.240719 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:31:53.240732 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:31:53.240745 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:31:53.240758 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:31:53.240770 | orchestrator | changed: [testbed-node-2] 2026-02-05 
02:31:53.240783 | orchestrator |
2026-02-05 02:31:53.240796 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 02:31:53.240808 | orchestrator | Thursday 05 February 2026 02:30:51 +0000 (0:00:02.113) 0:01:01.394 *****
2026-02-05 02:31:53.240820 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:31:53.240833 | orchestrator |
2026-02-05 02:31:53.240844 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 02:31:53.240854 | orchestrator | Thursday 05 February 2026 02:30:52 +0000 (0:00:01.088) 0:01:02.482 *****
2026-02-05 02:31:53.240865 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:31:53.240876 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:31:53.240891 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:31:53.240910 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:31:53.240927 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:31:53.240945 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:31:53.240961 | orchestrator |
2026-02-05 02:31:53.240979 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 02:31:53.241020 | orchestrator | Thursday 05 February 2026 02:30:53 +0000 (0:00:00.776) 0:01:03.259 *****
2026-02-05 02:31:53.241041 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:31:53.241062 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:31:53.241081 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:31:53.241096 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:31:53.241107 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:31:53.241118 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:31:53.241129 | orchestrator |
2026-02-05 02:31:53.241140 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 02:31:53.241150 | orchestrator | Thursday 05 February 2026 02:30:53 +0000 (0:00:00.591) 0:01:03.851 *****
2026-02-05 02:31:53.241161 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 02:31:53.241188 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 02:31:53.241239 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 02:31:53.241250 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 02:31:53.241262 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 02:31:53.241273 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 02:31:53.241284 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 02:31:53.241295 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 02:31:53.241306 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 02:31:53.241336 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 02:31:53.241348 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 02:31:53.241359 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 02:31:53.241370 | orchestrator |
2026-02-05 02:31:53.241381 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 02:31:53.241392 | orchestrator | Thursday 05 February 2026 02:30:55 +0000 (0:00:01.547) 0:01:05.398 *****
2026-02-05 02:31:53.241402 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:31:53.241413 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:31:53.241424 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:31:53.241435 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:31:53.241445 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:31:53.241456 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:31:53.241467 | orchestrator |
2026-02-05 02:31:53.241478 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 02:31:53.241489 | orchestrator | Thursday 05 February 2026 02:30:56 +0000 (0:00:00.947) 0:01:06.346 *****
2026-02-05 02:31:53.241499 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:31:53.241510 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:31:53.241521 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:31:53.241532 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:31:53.241542 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:31:53.241553 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:31:53.241563 | orchestrator |
2026-02-05 02:31:53.241574 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 02:31:53.241585 | orchestrator | Thursday 05 February 2026 02:30:56 +0000 (0:00:00.809) 0:01:07.156 *****
2026-02-05 02:31:53.241596 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:31:53.241607 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:31:53.241618 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:31:53.241628 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:31:53.241639 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:31:53.241650 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:31:53.241661 | orchestrator |
2026-02-05 02:31:53.241671 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 02:31:53.241682 | orchestrator | Thursday 05 February 2026 02:30:57 +0000 (0:00:00.652) 0:01:07.808 *****
2026-02-05 02:31:53.241693 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:31:53.241704 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:31:53.241715 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:31:53.241725 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:31:53.241736 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:31:53.241747 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:31:53.241758 | orchestrator |
2026-02-05 02:31:53.241768 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 02:31:53.241779 | orchestrator | Thursday 05 February 2026 02:30:58 +0000 (0:00:00.794) 0:01:08.603 *****
2026-02-05 02:31:53.241797 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:31:53.241808 | orchestrator |
2026-02-05 02:31:53.241819 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 02:31:53.241830 | orchestrator | Thursday 05 February 2026 02:30:59 +0000 (0:00:01.246) 0:01:09.850 *****
2026-02-05 02:31:53.241841 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:31:53.241852 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:31:53.241863 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:31:53.241873 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:31:53.241884 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:31:53.241895 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:31:53.241905 | orchestrator |
2026-02-05 02:31:53.241916 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 02:31:53.241927 | orchestrator | Thursday 05 February 2026 02:31:52 +0000 (0:00:52.671) 0:02:02.522 *****
2026-02-05 02:31:53.241938 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 02:31:53.241949 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 02:31:53.241960 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 02:31:53.241971 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:31:53.241982 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 02:31:53.241993 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 02:31:53.242004 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 02:31:53.242014 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:31:53.242093 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 02:31:53.242104 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 02:31:53.242121 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 02:31:53.242132 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:31:53.242144 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 02:31:53.242154 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 02:31:53.242165 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 02:31:53.242176 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:31:53.242187 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 02:31:53.242252 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 02:31:53.242272 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 02:31:53.242299 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.872972 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 02:32:15.873153 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 02:32:15.873176 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 02:32:15.873188 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.873232 | orchestrator |
2026-02-05 02:32:15.873246 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 02:32:15.873258 | orchestrator | Thursday 05 February 2026 02:31:53 +0000 (0:00:00.889) 0:02:03.411 *****
2026-02-05 02:32:15.873269 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.873280 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.873291 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.873303 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.873313 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.873349 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.873361 | orchestrator |
2026-02-05 02:32:15.873372 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 02:32:15.873383 | orchestrator | Thursday 05 February 2026 02:31:53 +0000 (0:00:00.573) 0:02:03.985 *****
2026-02-05 02:32:15.873394 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.873405 | orchestrator |
2026-02-05 02:32:15.873416 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 02:32:15.873427 | orchestrator | Thursday 05 February 2026 02:31:53 +0000 (0:00:00.152) 0:02:04.138 *****
2026-02-05 02:32:15.873438 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.873449 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.873460 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.873471 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.873482 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.873493 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.873504 | orchestrator |
2026-02-05 02:32:15.873518 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 02:32:15.873531 | orchestrator | Thursday 05 February 2026 02:31:54 +0000 (0:00:00.821) 0:02:04.960 *****
2026-02-05 02:32:15.873544 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.873557 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.873569 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.873596 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.873609 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.873633 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.873646 | orchestrator |
2026-02-05 02:32:15.873659 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-05 02:32:15.873672 | orchestrator | Thursday 05 February 2026 02:31:55 +0000 (0:00:00.591) 0:02:05.551 *****
2026-02-05 02:32:15.873685 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.873698 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.873711 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.873725 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.873737 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.873750 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.873763 | orchestrator |
2026-02-05 02:32:15.873776 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 02:32:15.873789 | orchestrator | Thursday 05 February 2026 02:31:56 +0000 (0:00:00.822) 0:02:06.373 *****
2026-02-05 02:32:15.873841 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:15.873856 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:15.873869 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:32:15.873882 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:32:15.873895 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:15.873906 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:32:15.873938 | orchestrator |
2026-02-05 02:32:15.873950 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 02:32:15.873982 | orchestrator | Thursday 05 February 2026 02:31:59 +0000 (0:00:03.203) 0:02:09.577 *****
2026-02-05 02:32:15.873995 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:15.874006 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:15.874069 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:15.874081 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:32:15.874092 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:32:15.874103 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:32:15.874114 | orchestrator |
2026-02-05 02:32:15.874125 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 02:32:15.874136 | orchestrator | Thursday 05 February 2026 02:32:00 +0000 (0:00:00.813) 0:02:10.390 *****
2026-02-05 02:32:15.874148 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:32:15.874161 | orchestrator |
2026-02-05 02:32:15.874172 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-05 02:32:15.874237 | orchestrator | Thursday 05 February 2026 02:32:01 +0000 (0:00:01.250) 0:02:11.640 *****
2026-02-05 02:32:15.874251 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874262 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874273 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874284 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874310 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874322 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874332 | orchestrator |
2026-02-05 02:32:15.874343 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-05 02:32:15.874354 | orchestrator | Thursday 05 February 2026 02:32:02 +0000 (0:00:00.644) 0:02:12.285 *****
2026-02-05 02:32:15.874365 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874376 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874386 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874397 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874408 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874418 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874429 | orchestrator |
2026-02-05 02:32:15.874440 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-05 02:32:15.874451 | orchestrator | Thursday 05 February 2026 02:32:03 +0000 (0:00:00.900) 0:02:13.186 *****
2026-02-05 02:32:15.874462 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874494 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874506 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874517 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874527 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874538 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874549 | orchestrator |
2026-02-05 02:32:15.874560 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-05 02:32:15.874571 | orchestrator | Thursday 05 February 2026 02:32:03 +0000 (0:00:00.786) 0:02:13.972 *****
2026-02-05 02:32:15.874582 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874593 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874604 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874614 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874625 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874636 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874646 | orchestrator |
2026-02-05 02:32:15.874657 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-05 02:32:15.874668 | orchestrator | Thursday 05 February 2026 02:32:04 +0000 (0:00:00.603) 0:02:14.576 *****
2026-02-05 02:32:15.874679 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874689 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874700 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874711 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874721 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874732 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874743 | orchestrator |
2026-02-05 02:32:15.874754 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-05 02:32:15.874764 | orchestrator | Thursday 05 February 2026 02:32:05 +0000 (0:00:00.818) 0:02:15.394 *****
2026-02-05 02:32:15.874775 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874786 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874796 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874807 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874818 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874828 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874839 | orchestrator |
2026-02-05 02:32:15.874850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-05 02:32:15.874861 | orchestrator | Thursday 05 February 2026 02:32:05 +0000 (0:00:00.614) 0:02:16.009 *****
2026-02-05 02:32:15.874880 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874891 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874901 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.874912 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.874923 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.874933 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.874944 | orchestrator |
2026-02-05 02:32:15.874955 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-05 02:32:15.874966 | orchestrator | Thursday 05 February 2026 02:32:06 +0000 (0:00:00.784) 0:02:16.793 *****
2026-02-05 02:32:15.874977 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:15.874987 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:15.874998 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:15.875009 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:15.875020 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:15.875030 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:15.875041 | orchestrator |
2026-02-05 02:32:15.875052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-05 02:32:15.875062 | orchestrator | Thursday 05 February 2026 02:32:07 +0000 (0:00:00.619) 0:02:17.412 *****
2026-02-05 02:32:15.875074 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:15.875084 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:15.875095 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:15.875106 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:32:15.875117 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:32:15.875127 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:32:15.875138 | orchestrator |
2026-02-05 02:32:15.875149 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 02:32:15.875160 | orchestrator | Thursday 05 February 2026 02:32:08 +0000 (0:00:01.258) 0:02:18.671 *****
2026-02-05 02:32:15.875171 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:32:15.875183 | orchestrator |
2026-02-05 02:32:15.875194 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-05 02:32:15.875277 | orchestrator | Thursday 05 February 2026 02:32:09 +0000 (0:00:01.222) 0:02:19.894 *****
2026-02-05 02:32:15.875297 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-05 02:32:15.875314 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-05 02:32:15.875326 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-05 02:32:15.875337 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-05 02:32:15.875348 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-05 02:32:15.875358 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-05 02:32:15.875369 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-05 02:32:15.875386 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-05 02:32:15.875397 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-05 02:32:15.875408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-05 02:32:15.875419 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-05 02:32:15.875429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-05 02:32:15.875440 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-05 02:32:15.875451 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-05 02:32:15.875462 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-05 02:32:15.875473 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-05 02:32:15.875484 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-05 02:32:15.875502 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-05 02:32:20.980401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-05 02:32:20.980554 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-05 02:32:20.980581 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-05 02:32:20.980600 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-05 02:32:20.980618 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-05 02:32:20.980637 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-05 02:32:20.980655 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-05 02:32:20.980673 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-05 02:32:20.980692 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-05 02:32:20.980710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-05 02:32:20.980729 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-05 02:32:20.980748 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-05 02:32:20.980767 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-05 02:32:20.980785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-05 02:32:20.980804 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-05 02:32:20.980822 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-05 02:32:20.980841 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-05 02:32:20.980861 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-05 02:32:20.980879 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-05 02:32:20.980897 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-05 02:32:20.980917 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-05 02:32:20.980938 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-05 02:32:20.980957 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-05 02:32:20.980974 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-05 02:32:20.980987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-05 02:32:20.981000 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-05 02:32:20.981013 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-05 02:32:20.981026 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 02:32:20.981039 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-05 02:32:20.981051 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 02:32:20.981064 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-05 02:32:20.981076 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-05 02:32:20.981088 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 02:32:20.981101 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 02:32:20.981113 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 02:32:20.981126 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 02:32:20.981138 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 02:32:20.981152 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 02:32:20.981164 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 02:32:20.981176 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 02:32:20.981189 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 02:32:20.981257 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 02:32:20.981272 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 02:32:20.981325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 02:32:20.981336 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 02:32:20.981347 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 02:32:20.981358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 02:32:20.981369 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 02:32:20.981380 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 02:32:20.981405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 02:32:20.981417 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 02:32:20.981427 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 02:32:20.981438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 02:32:20.981449 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 02:32:20.981460 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 02:32:20.981471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 02:32:20.981482 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 02:32:20.981493 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 02:32:20.981525 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 02:32:20.981537 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 02:32:20.981548 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 02:32:20.981559 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 02:32:20.981570 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 02:32:20.981581 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-05 02:32:20.981592 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 02:32:20.981603 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-05 02:32:20.981614 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 02:32:20.981625 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-05 02:32:20.981636 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 02:32:20.981647 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-05 02:32:20.981658 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-05 02:32:20.981669 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-05 02:32:20.981679 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-05 02:32:20.981690 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-05 02:32:20.981701 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-05 02:32:20.981712 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-05 02:32:20.981722 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-05 02:32:20.981733 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-05 02:32:20.981744 | orchestrator |
2026-02-05 02:32:20.981756 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 02:32:20.981767 | orchestrator | Thursday 05 February 2026 02:32:15 +0000 (0:00:06.140) 0:02:26.034 *****
2026-02-05 02:32:20.981777 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:20.981788 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:20.981799 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:20.981811 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:32:20.981832 | orchestrator |
2026-02-05 02:32:20.981843 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-05 02:32:20.981854 | orchestrator | Thursday 05 February 2026 02:32:16 +0000 (0:00:00.840) 0:02:26.874 *****
2026-02-05 02:32:20.981865 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:20.981877 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:20.981888 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:20.981898 | orchestrator |
2026-02-05 02:32:20.981909 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-05 02:32:20.981920 | orchestrator | Thursday 05 February 2026 02:32:17 +0000 (0:00:00.893) 0:02:27.768 *****
2026-02-05 02:32:20.981931 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:20.981942 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:20.981953 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:20.981964 | orchestrator |
2026-02-05 02:32:20.981975 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 02:32:20.981986 | orchestrator | Thursday 05 February 2026 02:32:18 +0000 (0:00:01.324) 0:02:29.093 *****
2026-02-05 02:32:20.981997 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:20.982008 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:20.982074 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:20.982085 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:20.982096 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:20.982107 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:20.982118 | orchestrator |
2026-02-05 02:32:20.982129 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 02:32:20.982146 | orchestrator | Thursday 05 February 2026 02:32:19 +0000 (0:00:00.594) 0:02:29.687 *****
2026-02-05 02:32:20.982157 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:20.982168 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:20.982179 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:20.982190 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:20.982222 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:20.982235 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:20.982246 | orchestrator |
2026-02-05 02:32:20.982257 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 02:32:20.982268 | orchestrator | Thursday 05 February 2026 02:32:20 +0000 (0:00:00.867) 0:02:30.554 *****
2026-02-05 02:32:20.982278 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:20.982289 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:20.982300 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:20.982311 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:20.982322 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:20.982333 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:20.982343 | orchestrator |
2026-02-05 02:32:20.982364 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 02:32:34.044932 | orchestrator | Thursday 05 February 2026 02:32:20 +0000 (0:00:00.598) 0:02:31.153 *****
2026-02-05 02:32:34.045075 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.045094 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.045107 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.045122 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.045134 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.045147 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.045196 | orchestrator |
2026-02-05 02:32:34.045484 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 02:32:34.045514 | orchestrator | Thursday 05 February 2026 02:32:21 +0000 (0:00:00.870) 0:02:32.024 *****
2026-02-05 02:32:34.045532 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.045550 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.045572 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.045585 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.045604 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.045618 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.045632 | orchestrator |
2026-02-05 02:32:34.045645 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 02:32:34.045676 | orchestrator | Thursday 05 February 2026 02:32:22 +0000 (0:00:00.575) 0:02:32.600 *****
2026-02-05 02:32:34.045688 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.045699 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.046369 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.046422 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.046441 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.046455 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.046471 | orchestrator |
2026-02-05 02:32:34.046499 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 02:32:34.046512 | orchestrator | Thursday 05 February 2026 02:32:23 +0000 (0:00:00.809) 0:02:33.409 *****
2026-02-05 02:32:34.046524 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.046537 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.046549 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.046563 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.046575 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.046587 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.046603 | orchestrator |
2026-02-05 02:32:34.046616 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 02:32:34.046631 | orchestrator | Thursday 05 February 2026 02:32:23 +0000 (0:00:00.595) 0:02:34.004 *****
2026-02-05 02:32:34.046645 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.046660 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.046675 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.046690 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.046703 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.046716 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.046729 | orchestrator |
2026-02-05 02:32:34.046743 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 02:32:34.046756 | orchestrator | Thursday 05 February 2026 02:32:24 +0000 (0:00:00.794) 0:02:34.799 *****
2026-02-05 02:32:34.046768 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.046792 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.046805 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.046818 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:34.046831 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:34.046845 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:34.046857 | orchestrator |
2026-02-05 02:32:34.046869 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 02:32:34.046883 | orchestrator | Thursday 05 February 2026 02:32:27 +0000 (0:00:02.736) 0:02:37.535 *****
2026-02-05 02:32:34.046895 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:34.046907 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:34.046920 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:34.046934 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.046949 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.046961 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.046980 |
orchestrator | 2026-02-05 02:32:34.046992 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 02:32:34.047024 | orchestrator | Thursday 05 February 2026 02:32:28 +0000 (0:00:00.838) 0:02:38.374 ***** 2026-02-05 02:32:34.047039 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:32:34.047053 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:32:34.047064 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:32:34.047078 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:32:34.047093 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:32:34.047105 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:32:34.047118 | orchestrator | 2026-02-05 02:32:34.047134 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 02:32:34.047154 | orchestrator | Thursday 05 February 2026 02:32:28 +0000 (0:00:00.645) 0:02:39.019 ***** 2026-02-05 02:32:34.047166 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:32:34.047179 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:32:34.047227 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:32:34.047241 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:32:34.047255 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:32:34.047266 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:32:34.047280 | orchestrator | 2026-02-05 02:32:34.047293 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 02:32:34.047307 | orchestrator | Thursday 05 February 2026 02:32:29 +0000 (0:00:00.836) 0:02:39.855 ***** 2026-02-05 02:32:34.047319 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 02:32:34.047333 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-02-05 02:32:34.047346 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 02:32:34.047358 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.047398 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.047411 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.047423 | orchestrator |
2026-02-05 02:32:34.047435 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 02:32:34.047446 | orchestrator | Thursday 05 February 2026 02:32:30 +0000 (0:00:00.803) 0:02:40.659 *****
2026-02-05 02:32:34.047461 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-05 02:32:34.047476 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-05 02:32:34.047488 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.047500 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-05 02:32:34.047511 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-05 02:32:34.047525 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.047538 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-05 02:32:34.047561 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-05 02:32:34.047574 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.047588 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.047599 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.047613 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.047624 | orchestrator |
2026-02-05 02:32:34.047636 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 02:32:34.047648 | orchestrator | Thursday 05 February 2026 02:32:31 +0000 (0:00:00.643) 0:02:41.302 *****
2026-02-05 02:32:34.047662 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.047673 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.047685 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.047696 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.047707 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.047719 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.047729 | orchestrator |
2026-02-05 02:32:34.047742 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 02:32:34.047754 | orchestrator | Thursday 05 February 2026 02:32:31 +0000 (0:00:00.815) 0:02:42.118 *****
2026-02-05 02:32:34.047767 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.047780 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.047792 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.047802 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.047814 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.047827 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.047839 | orchestrator |
2026-02-05 02:32:34.047852 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 02:32:34.047865 | orchestrator | Thursday 05 February 2026 02:32:32 +0000 (0:00:00.826) 0:02:42.730 *****
2026-02-05 02:32:34.047882 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.047905 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.047918 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.047929 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.047943 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.047954 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.047964 | orchestrator |
2026-02-05 02:32:34.047977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 02:32:34.047987 | orchestrator | Thursday 05 February 2026 02:32:33 +0000 (0:00:00.826) 0:02:43.556 *****
2026-02-05 02:32:34.048001 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:34.048015 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:34.048026 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:34.048036 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:34.048049 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:34.048075 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:34.048092 | orchestrator |
2026-02-05 02:32:34.048103 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 02:32:34.048135 | orchestrator | Thursday 05 February 2026 02:32:34 +0000 (0:00:00.659) 0:02:44.216 *****
2026-02-05 02:32:52.455970 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.456107 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:52.456125 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:52.456137 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:52.456149 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:52.456160 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:52.456198 | orchestrator |
2026-02-05 02:32:52.456211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 02:32:52.456223 | orchestrator | Thursday 05 February 2026 02:32:34 +0000 (0:00:00.943) 0:02:45.159 *****
2026-02-05 02:32:52.456270 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:52.456287 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:52.456298 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:52.456309 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:52.456320 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:52.456331 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:52.456342 | orchestrator |
2026-02-05 02:32:52.456353 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 02:32:52.456364 | orchestrator | Thursday 05 February 2026 02:32:35 +0000 (0:00:00.676) 0:02:45.836 *****
2026-02-05 02:32:52.456376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:32:52.456388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:32:52.456398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:32:52.456410 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.456421 | orchestrator |
2026-02-05 02:32:52.456433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 02:32:52.456444 | orchestrator | Thursday 05 February 2026 02:32:36 +0000 (0:00:00.472) 0:02:46.309 *****
2026-02-05 02:32:52.456455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:32:52.456466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:32:52.456477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:32:52.456487 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.456498 | orchestrator |
2026-02-05 02:32:52.456509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 02:32:52.456520 | orchestrator | Thursday 05 February 2026 02:32:36 +0000 (0:00:00.528) 0:02:46.838 *****
2026-02-05 02:32:52.456531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:32:52.456542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:32:52.456553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:32:52.456563 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.456574 | orchestrator |
2026-02-05 02:32:52.456585 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 02:32:52.456596 | orchestrator | Thursday 05 February 2026 02:32:37 +0000 (0:00:00.404) 0:02:47.242 *****
2026-02-05 02:32:52.456607 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:32:52.456618 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:32:52.456629 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:32:52.456640 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:52.456651 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:52.456662 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:52.456673 | orchestrator |
2026-02-05 02:32:52.456684 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 02:32:52.456694 | orchestrator | Thursday 05 February 2026 02:32:37 +0000 (0:00:00.899) 0:02:48.142 *****
2026-02-05 02:32:52.456705 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-05 02:32:52.456716 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-05 02:32:52.456727 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-05 02:32:52.456738 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-05 02:32:52.456749 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:52.456760 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-05 02:32:52.456771 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:52.456782 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-05 02:32:52.456793 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:52.456823 | orchestrator |
2026-02-05 02:32:52.456835 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 02:32:52.456854 | orchestrator | Thursday 05 February 2026 02:32:39 +0000 (0:00:01.722) 0:02:49.864 *****
2026-02-05 02:32:52.456878 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:32:52.456889 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:32:52.456900 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:32:52.456911 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:32:52.456922 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:32:52.456932 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:32:52.456943 | orchestrator |
2026-02-05 02:32:52.456954 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-05 02:32:52.456965 | orchestrator | Thursday 05 February 2026 02:32:42 +0000 (0:00:02.413) 0:02:52.278 *****
2026-02-05 02:32:52.456976 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:32:52.457001 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:32:52.457013 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:32:52.457024 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:32:52.457035 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:32:52.457045 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:32:52.457056 | orchestrator |
2026-02-05 02:32:52.457067 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-05 02:32:52.457078 | orchestrator | Thursday 05 February 2026 02:32:43 +0000 (0:00:01.322) 0:02:53.601 *****
2026-02-05 02:32:52.457089 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.457100 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:52.457110 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:52.457122 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:32:52.457133 | orchestrator |
2026-02-05 02:32:52.457144 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-05 02:32:52.457155 | orchestrator | Thursday 05 February 2026 02:32:44 +0000 (0:00:01.054) 0:02:54.656 *****
2026-02-05 02:32:52.457166 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:32:52.457197 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:32:52.457208 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:32:52.457220 | orchestrator |
2026-02-05 02:32:52.457284 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-05 02:32:52.457296 | orchestrator | Thursday 05 February 2026 02:32:44 +0000 (0:00:00.341) 0:02:54.997 *****
2026-02-05 02:32:52.457307 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:32:52.457318 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:32:52.457329 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:32:52.457340 | orchestrator |
2026-02-05 02:32:52.457351 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-05 02:32:52.457362 | orchestrator | Thursday 05 February 2026 02:32:46 +0000 (0:00:01.245) 0:02:56.243 *****
2026-02-05 02:32:52.457372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 02:32:52.457383 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 02:32:52.457394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 02:32:52.457405 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:52.457416 | orchestrator |
2026-02-05 02:32:52.457426 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-05 02:32:52.457437 | orchestrator | Thursday 05 February 2026 02:32:46 +0000 (0:00:00.867) 0:02:57.110 *****
2026-02-05 02:32:52.457448 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:32:52.457459 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:32:52.457470 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:32:52.457481 | orchestrator |
2026-02-05 02:32:52.457492 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-05 02:32:52.457503 | orchestrator | Thursday 05 February 2026 02:32:47 +0000 (0:00:00.569) 0:02:57.680 *****
2026-02-05 02:32:52.457513 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:32:52.457524 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:32:52.457535 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:32:52.457554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:32:52.457565 | orchestrator |
2026-02-05 02:32:52.457576 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-05 02:32:52.457587 | orchestrator | Thursday 05 February 2026 02:32:48 +0000 (0:00:00.904) 0:02:58.584 *****
2026-02-05 02:32:52.457598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:32:52.457609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:32:52.457619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:32:52.457630 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.457641 | orchestrator |
2026-02-05 02:32:52.457652 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-05 02:32:52.457663 | orchestrator | Thursday 05 February 2026 02:32:49 +0000 (0:00:00.663) 0:02:59.248 *****
2026-02-05 02:32:52.457696 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.457714 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:52.457734 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:52.457754 | orchestrator |
2026-02-05 02:32:52.457774 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-05 02:32:52.457794 | orchestrator | Thursday 05 February 2026 02:32:49 +0000 (0:00:00.540) 0:02:59.789 *****
2026-02-05 02:32:52.457814 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.457834 | orchestrator |
2026-02-05 02:32:52.457855 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-05 02:32:52.457876 | orchestrator | Thursday 05 February 2026 02:32:49 +0000 (0:00:00.253) 0:03:00.042 *****
2026-02-05 02:32:52.457896 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.457917 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:32:52.457934 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:32:52.457945 | orchestrator |
2026-02-05 02:32:52.457956 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-05 02:32:52.457967 | orchestrator | Thursday 05 February 2026 02:32:50 +0000 (0:00:00.358) 0:03:00.401 *****
2026-02-05 02:32:52.457978 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.457988 | orchestrator |
2026-02-05 02:32:52.457999 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-05 02:32:52.458010 | orchestrator | Thursday 05 February 2026 02:32:50 +0000 (0:00:00.238) 0:03:00.639 *****
2026-02-05 02:32:52.458092 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.458104 | orchestrator |
2026-02-05 02:32:52.458115 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-05 02:32:52.458126 | orchestrator | Thursday 05 February 2026 02:32:50 +0000 (0:00:00.227) 0:03:00.867 *****
2026-02-05 02:32:52.458137 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.458148 | orchestrator |
2026-02-05 02:32:52.458159 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-05 02:32:52.458169 | orchestrator | Thursday 05 February 2026 02:32:50 +0000 (0:00:00.133) 0:03:01.000 *****
2026-02-05 02:32:52.458188 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.458199 | orchestrator |
2026-02-05 02:32:52.458210 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-05 02:32:52.458221 | orchestrator | Thursday 05 February 2026 02:32:51 +0000 (0:00:00.252) 0:03:01.252 *****
2026-02-05 02:32:52.458232 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.458277 | orchestrator |
2026-02-05 02:32:52.458296 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-05 02:32:52.458315 | orchestrator | Thursday 05 February 2026 02:32:51 +0000 (0:00:00.256) 0:03:01.509 *****
2026-02-05 02:32:52.458333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:32:52.458351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:32:52.458363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:32:52.458384 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:32:52.458396 | orchestrator |
2026-02-05 02:32:52.458407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-05 02:32:52.458418 | orchestrator | Thursday 05 February 2026 02:32:52 +0000 (0:00:00.920) 0:03:02.430 *****
2026-02-05 02:32:52.458441 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.711636 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:33:10.711772 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:33:10.711795 | orchestrator |
2026-02-05 02:33:10.711815 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-05 02:33:10.711834 | orchestrator | Thursday 05 February 2026 02:32:52 +0000 (0:00:00.337) 0:03:02.767 *****
2026-02-05 02:33:10.711850 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.711867 | orchestrator |
2026-02-05 02:33:10.711884 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-05 02:33:10.711901 | orchestrator | Thursday 05 February 2026 02:32:52 +0000 (0:00:00.260) 0:03:03.027 *****
2026-02-05 02:33:10.711919 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.711936 | orchestrator |
2026-02-05 02:33:10.711952 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-05 02:33:10.711969 | orchestrator | Thursday 05 February 2026 02:32:53 +0000 (0:00:00.231) 0:03:03.259 *****
2026-02-05 02:33:10.711987 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.712003 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.712021 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.712039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:33:10.712055 | orchestrator |
2026-02-05 02:33:10.712073 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-05 02:33:10.712090 | orchestrator | Thursday 05 February 2026 02:32:54 +0000 (0:00:01.096) 0:03:04.355 *****
2026-02-05 02:33:10.712107 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:33:10.712126 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:33:10.712143 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:33:10.712160 | orchestrator |
2026-02-05 02:33:10.712178 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-05 02:33:10.712198 | orchestrator | Thursday 05 February 2026 02:32:54 +0000 (0:00:00.342) 0:03:04.698 *****
2026-02-05 02:33:10.712218 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:33:10.712237 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:33:10.712257 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:33:10.712302 | orchestrator |
2026-02-05 02:33:10.712322 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-05 02:33:10.712342 | orchestrator | Thursday 05 February 2026 02:32:55 +0000 (0:00:01.206) 0:03:05.904 *****
2026-02-05 02:33:10.712361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:33:10.712380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:33:10.712400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:33:10.712420 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.712440 | orchestrator |
2026-02-05 02:33:10.712459 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-05 02:33:10.712473 | orchestrator | Thursday 05 February 2026 02:32:56 +0000 (0:00:00.858) 0:03:06.762 *****
2026-02-05 02:33:10.712486 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:33:10.712499 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:33:10.712511 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:33:10.712524 | orchestrator |
2026-02-05 02:33:10.712538 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-05 02:33:10.712550 | orchestrator | Thursday 05 February 2026 02:32:57 +0000 (0:00:00.603) 0:03:07.366 *****
2026-02-05 02:33:10.712561 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.712572 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.712583 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.712622 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:33:10.712634 | orchestrator |
2026-02-05 02:33:10.712644 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-05 02:33:10.712655 | orchestrator | Thursday 05 February 2026 02:32:58 +0000 (0:00:00.855) 0:03:08.222 *****
2026-02-05 02:33:10.712666 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:33:10.712677 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:33:10.712688 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:33:10.712699 | orchestrator |
2026-02-05 02:33:10.712710 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-05 02:33:10.712721 | orchestrator | Thursday 05 February 2026 02:32:58 +0000 (0:00:00.563) 0:03:08.785 *****
2026-02-05 02:33:10.712731 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:33:10.712744 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:33:10.712763 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:33:10.712790 | orchestrator |
2026-02-05 02:33:10.712810 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-05 02:33:10.712828 | orchestrator | Thursday 05 February 2026 02:32:59 +0000 (0:00:01.238) 0:03:10.024 *****
2026-02-05 02:33:10.712846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:33:10.712863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:33:10.712899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:33:10.712918 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.712936 | orchestrator |
2026-02-05 02:33:10.712955 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-05 02:33:10.712972 | orchestrator | Thursday 05 February 2026 02:33:00 +0000 (0:00:00.657) 0:03:10.681 *****
2026-02-05 02:33:10.712989 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:33:10.713006 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:33:10.713023 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:33:10.713039 | orchestrator |
2026-02-05 02:33:10.713058 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-05 02:33:10.713075 | orchestrator | Thursday 05 February 2026 02:33:00 +0000 (0:00:00.352) 0:03:11.034 *****
2026-02-05 02:33:10.713094 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.713112 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:33:10.713128 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:33:10.713147 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.713166 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.713184 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.713202 | orchestrator |
2026-02-05 02:33:10.713240 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-05 02:33:10.713252 | orchestrator | Thursday 05 February 2026 02:33:01 +0000 (0:00:00.875) 0:03:11.909 *****
2026-02-05 02:33:10.713292 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:33:10.713305 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:33:10.713316 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:33:10.713327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:33:10.713338 | orchestrator |
2026-02-05 02:33:10.713349 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-05 02:33:10.713360 | orchestrator | Thursday 05 February 2026 02:33:02 +0000 (0:00:00.833) 0:03:12.743 *****
2026-02-05 02:33:10.713370 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:10.713381 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:10.713392 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:10.713403 | orchestrator |
2026-02-05 02:33:10.713414 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-05 02:33:10.713424 | orchestrator | Thursday 05 February 2026 02:33:03 +0000 (0:00:00.604) 0:03:13.348 *****
2026-02-05 02:33:10.713435 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:10.713460 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:33:10.713471 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:33:10.713482 | orchestrator |
2026-02-05 02:33:10.713493 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-05 02:33:10.713504 | orchestrator | Thursday 05 February 2026 02:33:04 +0000 (0:00:01.228) 0:03:14.576 *****
2026-02-05 02:33:10.713515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 02:33:10.713526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 02:33:10.713537 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 02:33:10.713547 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.713558 | orchestrator |
2026-02-05 02:33:10.713569 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-05 02:33:10.713580 | orchestrator | Thursday 05 February 2026 02:33:05 +0000 (0:00:00.684) 0:03:15.260 *****
2026-02-05 02:33:10.713591 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:10.713602 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:10.713612 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:10.713623 | orchestrator |
2026-02-05 02:33:10.713634 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-05 02:33:10.713645 | orchestrator |
2026-02-05 02:33:10.713655 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 02:33:10.713666 | orchestrator | Thursday 05 February 2026 02:33:05 +0000 (0:00:00.861) 0:03:16.121 *****
2026-02-05 02:33:10.713678 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:33:10.713690 | orchestrator |
2026-02-05 02:33:10.713701 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 02:33:10.713712 | orchestrator | Thursday 05 February 2026 02:33:06 +0000 (0:00:00.546) 0:03:16.668 *****
2026-02-05 02:33:10.713723 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:33:10.713734 | orchestrator |
2026-02-05 02:33:10.713744 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 02:33:10.713755 | orchestrator | Thursday 05 February 2026 02:33:07 +0000 (0:00:00.778) 0:03:17.447 *****
2026-02-05 02:33:10.713766 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:10.713776 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:10.713787 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:10.713798 | orchestrator |
2026-02-05 02:33:10.713809 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 02:33:10.713819 | orchestrator | Thursday 05 February 2026 02:33:07 +0000 (0:00:00.719) 0:03:18.167 *****
2026-02-05 02:33:10.713830 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.713841 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.713852 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.713862 | orchestrator |
2026-02-05 02:33:10.713873 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 02:33:10.713884 | orchestrator | Thursday 05 February 2026 02:33:08 +0000 (0:00:00.369) 0:03:18.536 *****
2026-02-05 02:33:10.713895 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.713906 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.713916 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.713927 | orchestrator |
2026-02-05 02:33:10.713937 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 02:33:10.713948 | orchestrator | Thursday 05 February 2026 02:33:08 +0000 (0:00:00.340) 0:03:18.877 *****
2026-02-05 02:33:10.713959 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.713970 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.713981 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.713991 | orchestrator |
2026-02-05 02:33:10.714009 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 02:33:10.714091 | orchestrator | Thursday 05 February 2026 02:33:09 +0000 (0:00:00.601) 0:03:19.478 *****
2026-02-05 02:33:10.714112 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:10.714123 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:10.714134 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:10.714145 | orchestrator |
2026-02-05 02:33:10.714156 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 02:33:10.714167 | orchestrator | Thursday 05 February 2026 02:33:10 +0000 (0:00:00.750) 0:03:20.228 *****
2026-02-05 02:33:10.714178 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.714190 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.714210 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:10.714229 | orchestrator |
2026-02-05 02:33:10.714250 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 02:33:10.714291 | orchestrator | Thursday 05 February 2026 02:33:10 +0000 (0:00:00.317) 0:03:20.546 *****
2026-02-05 02:33:10.714311 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:10.714329 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:10.714360 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.203913 | orchestrator |
2026-02-05 02:33:34.204025 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 02:33:34.204051 | orchestrator | Thursday 05 February 2026 02:33:10 +0000 (0:00:00.336) 0:03:20.883 *****
2026-02-05 02:33:34.204070 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.204091 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.204110 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.204130 | orchestrator |
2026-02-05 02:33:34.204150 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 02:33:34.204162 | orchestrator | Thursday 05 February 2026 02:33:11 +0000 (0:00:01.044) 0:03:21.927 *****
2026-02-05 02:33:34.204174 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.204185 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.204196 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.204210 | orchestrator |
2026-02-05 02:33:34.204229 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 02:33:34.204246 | orchestrator | Thursday 05 February 2026 02:33:12 +0000 (0:00:00.732) 0:03:22.660 *****
2026-02-05 02:33:34.204263 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.204282 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:34.204326 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.204346 | orchestrator |
2026-02-05 02:33:34.204365 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 02:33:34.204385 | orchestrator | Thursday 05 February 2026 02:33:12 +0000 (0:00:00.311) 0:03:22.972 *****
2026-02-05 02:33:34.204404 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.204423 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.204443 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.204463 | orchestrator |
2026-02-05 02:33:34.204483 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 02:33:34.204499 | orchestrator | Thursday 05 February 2026 02:33:13 +0000 (0:00:00.361) 0:03:23.333 *****
2026-02-05 02:33:34.204513 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.204526 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:34.204540 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.204553 | orchestrator |
2026-02-05 02:33:34.204566 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 02:33:34.204580 | orchestrator | Thursday 05 February 2026 02:33:13 +0000 (0:00:00.575) 0:03:23.908 *****
2026-02-05 02:33:34.204593 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.204606 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:34.204619 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.204632 | orchestrator |
2026-02-05 02:33:34.204645 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 02:33:34.204657 | orchestrator | Thursday 05 February 2026 02:33:14 +0000 (0:00:00.353) 0:03:24.261 *****
2026-02-05 02:33:34.204670 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.204711 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:34.204724 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.204736 | orchestrator |
2026-02-05 02:33:34.204749 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 02:33:34.204762 | orchestrator | Thursday 05 February 2026 02:33:14 +0000 (0:00:00.346) 0:03:24.607 *****
2026-02-05 02:33:34.204775 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.204788 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:34.204799 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.204810 | orchestrator |
2026-02-05 02:33:34.204821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 02:33:34.204832 | orchestrator | Thursday 05 February 2026 02:33:14 +0000 (0:00:00.348) 0:03:24.956 *****
2026-02-05 02:33:34.204842 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.204853 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:33:34.204864 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:33:34.204875 | orchestrator |
2026-02-05 02:33:34.204885 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 02:33:34.204896 | orchestrator | Thursday 05 February 2026 02:33:15 +0000 (0:00:00.654) 0:03:25.611 *****
2026-02-05 02:33:34.204907 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.204918 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.204929 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.204939 | orchestrator |
2026-02-05 02:33:34.204950 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 02:33:34.204961 | orchestrator | Thursday 05 February 2026 02:33:15 +0000 (0:00:00.374) 0:03:25.986 *****
2026-02-05 02:33:34.204972 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.204983 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.204994 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.205004 | orchestrator |
2026-02-05 02:33:34.205015 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 02:33:34.205026 | orchestrator | Thursday 05 February 2026 02:33:16 +0000 (0:00:00.328) 0:03:26.314 *****
2026-02-05 02:33:34.205037 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.205048 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.205058 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.205069 | orchestrator |
2026-02-05 02:33:34.205095 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-05 02:33:34.205107 | orchestrator | Thursday 05 February 2026 02:33:16 +0000 (0:00:00.802) 0:03:27.117 *****
2026-02-05 02:33:34.205118 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.205129 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.205139 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.205150 | orchestrator |
2026-02-05 02:33:34.205161 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-05 02:33:34.205172 | orchestrator | Thursday 05 February 2026 02:33:17 +0000 (0:00:00.390) 0:03:27.508 *****
2026-02-05 02:33:34.205183 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:33:34.205194 | orchestrator |
2026-02-05 02:33:34.205205 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-05 02:33:34.205216 | orchestrator | Thursday 05 February 2026 02:33:17 +0000 (0:00:00.570) 0:03:28.078 *****
2026-02-05 02:33:34.205227 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:33:34.205238 | orchestrator |
2026-02-05 02:33:34.205249 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-05 02:33:34.205281 | orchestrator | Thursday 05 February 2026 02:33:18 +0000 (0:00:00.170) 0:03:28.248 *****
2026-02-05 02:33:34.205293 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 02:33:34.205406 | orchestrator |
2026-02-05 02:33:34.205418 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-05 02:33:34.205429 | orchestrator | Thursday 05 February 2026 02:33:19 +0000 (0:00:01.537) 0:03:29.785 *****
2026-02-05 02:33:34.205450 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.205461 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.205472 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.205484 | orchestrator |
2026-02-05 02:33:34.205495 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-05 02:33:34.205506 | orchestrator | Thursday 05 February 2026 02:33:20 +0000 (0:00:00.420) 0:03:30.206 *****
2026-02-05 02:33:34.205517 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.205528 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.205539 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.205550 | orchestrator |
2026-02-05 02:33:34.205561 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-05 02:33:34.205572 | orchestrator | Thursday 05 February 2026 02:33:20 +0000 (0:00:00.508) 0:03:30.714 *****
2026-02-05 02:33:34.205583 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:34.205594 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:33:34.205605 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:33:34.205616 | orchestrator |
2026-02-05 02:33:34.205627 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-05 02:33:34.205638 | orchestrator | Thursday 05 February 2026 02:33:21 +0000 (0:00:01.231) 0:03:31.946 *****
2026-02-05 02:33:34.205649 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:34.205660 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:33:34.205671 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:33:34.205682 | orchestrator |
2026-02-05 02:33:34.205693 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-05 02:33:34.205705 | orchestrator | Thursday 05 February 2026 02:33:22 +0000 (0:00:01.063) 0:03:33.009 *****
2026-02-05 02:33:34.205716 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:34.205727 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:33:34.205738 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:33:34.205749 | orchestrator |
2026-02-05 02:33:34.205760 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-05 02:33:34.205771 | orchestrator | Thursday 05 February 2026 02:33:23 +0000 (0:00:00.685) 0:03:33.694 *****
2026-02-05 02:33:34.205782 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.205793 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.205804 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.205815 | orchestrator |
2026-02-05 02:33:34.205826 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-05 02:33:34.205837 | orchestrator | Thursday 05 February 2026 02:33:24 +0000 (0:00:00.649) 0:03:34.344 *****
2026-02-05 02:33:34.205848 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:34.205859 | orchestrator |
2026-02-05 02:33:34.205870 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-05 02:33:34.205882 | orchestrator | Thursday 05 February 2026 02:33:25 +0000 (0:00:01.346) 0:03:35.690 *****
2026-02-05 02:33:34.205893 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.205904 | orchestrator |
2026-02-05 02:33:34.205915 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-05 02:33:34.205925 | orchestrator | Thursday 05 February 2026 02:33:26 +0000 (0:00:00.753) 0:03:36.444 *****
2026-02-05 02:33:34.205937 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 02:33:34.205948 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 02:33:34.205959 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 02:33:34.205970 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-05 02:33:34.205981 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-05 02:33:34.205993 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-05 02:33:34.206004 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-05 02:33:34.206015 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-02-05 02:33:34.206103 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-05 02:33:34.206122 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-05 02:33:34.206133 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-05 02:33:34.206144 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-05 02:33:34.206155 | orchestrator |
2026-02-05 02:33:34.206166 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-05 02:33:34.206177 | orchestrator | Thursday 05 February 2026 02:33:29 +0000 (0:00:03.330) 0:03:39.775 *****
2026-02-05 02:33:34.206188 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:34.206199 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:33:34.206217 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:33:34.206229 | orchestrator |
2026-02-05 02:33:34.206255 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-05 02:33:34.206277 | orchestrator | Thursday 05 February 2026 02:33:31 +0000 (0:00:02.229) 0:03:42.004 *****
2026-02-05 02:33:34.206288 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.206324 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.206345 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.206358 | orchestrator |
2026-02-05 02:33:34.206369 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-05 02:33:34.206379 | orchestrator | Thursday 05 February 2026 02:33:32 +0000 (0:00:00.365) 0:03:42.369 *****
2026-02-05 02:33:34.206390 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:33:34.206401 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:33:34.206412 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:33:34.206422 | orchestrator |
2026-02-05 02:33:34.206433 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-05 02:33:34.206444 | orchestrator | Thursday 05 February 2026 02:33:32 +0000 (0:00:00.334) 0:03:42.704 *****
2026-02-05 02:33:34.206455 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:33:34.206466 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:33:34.206476 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:33:34.206487 | orchestrator |
2026-02-05 02:33:34.206509 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-05 02:34:16.476601 | orchestrator | Thursday 05 February 2026 02:33:34 +0000 (0:00:01.666) 0:03:44.371 *****
2026-02-05 02:34:16.476715 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:34:16.476733 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:34:16.476746 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:34:16.476757 | orchestrator |
2026-02-05 02:34:16.476769 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-05 02:34:16.476781 | orchestrator | Thursday 05 February 2026 02:33:35 +0000 (0:00:01.598) 0:03:45.969 *****
2026-02-05 02:34:16.476792 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.476804 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:16.476815 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:16.476826 | orchestrator |
2026-02-05 02:34:16.476837 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-05 02:34:16.476849 | orchestrator | Thursday 05 February 2026 02:33:36 +0000 (0:00:00.540) 0:03:46.319 *****
2026-02-05 02:34:16.476860 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:34:16.476871 | orchestrator |
2026-02-05 02:34:16.476883 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-05 02:34:16.476894 | orchestrator | Thursday 05 February 2026 02:33:36 +0000 (0:00:00.540) 0:03:46.859 *****
2026-02-05 02:34:16.476905 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.476916 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:16.476927 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:16.476939 | orchestrator |
2026-02-05 02:34:16.476949 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-05 02:34:16.476960 | orchestrator | Thursday 05 February 2026 02:33:37 +0000 (0:00:00.534) 0:03:47.394 *****
2026-02-05 02:34:16.476971 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.477013 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:16.477025 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:16.477036 | orchestrator |
2026-02-05 02:34:16.477047 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-05 02:34:16.477058 | orchestrator | Thursday 05 February 2026 02:33:37 +0000 (0:00:00.332) 0:03:47.727 *****
2026-02-05 02:34:16.477069 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:34:16.477081 | orchestrator |
2026-02-05 02:34:16.477092 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-05 02:34:16.477103 | orchestrator | Thursday 05 February 2026 02:33:38 +0000 (0:00:00.535) 0:03:48.262 *****
2026-02-05 02:34:16.477114 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:34:16.477125 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:34:16.477137 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:34:16.477150 | orchestrator |
2026-02-05 02:34:16.477163 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-05 02:34:16.477175 | orchestrator | Thursday 05 February 2026 02:33:39 +0000 (0:00:01.760) 0:03:50.023 *****
2026-02-05 02:34:16.477187 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:34:16.477200 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:34:16.477213 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:34:16.477226 | orchestrator |
2026-02-05 02:34:16.477239 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-05 02:34:16.477251 | orchestrator | Thursday 05 February 2026 02:33:41 +0000 (0:00:01.163) 0:03:51.186 *****
2026-02-05 02:34:16.477264 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:34:16.477277 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:34:16.477291 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:34:16.477303 | orchestrator |
2026-02-05 02:34:16.477317 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-05 02:34:16.477360 | orchestrator | Thursday 05 February 2026 02:33:42 +0000 (0:00:01.743) 0:03:52.930 *****
2026-02-05 02:34:16.477373 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:34:16.477386 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:34:16.477399 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:34:16.477412 | orchestrator |
2026-02-05 02:34:16.477425 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-05 02:34:16.477438 | orchestrator | Thursday 05 February 2026 02:33:45 +0000 (0:00:02.889) 0:03:55.819 *****
2026-02-05 02:34:16.477451 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:34:16.477464 | orchestrator |
2026-02-05 02:34:16.477476 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-05 02:34:16.477487 | orchestrator | Thursday 05 February 2026 02:33:46 +0000 (0:00:00.843) 0:03:56.662 *****
2026-02-05 02:34:16.477498 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:34:16.477510 | orchestrator |
2026-02-05 02:34:16.477538 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-05 02:34:16.477549 | orchestrator | Thursday 05 February 2026 02:33:47 +0000 (0:00:01.374) 0:03:58.037 *****
2026-02-05 02:34:16.477560 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:34:16.477571 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:34:16.477582 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:34:16.477593 | orchestrator |
2026-02-05 02:34:16.477604 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-05 02:34:16.477615 | orchestrator | Thursday 05 February 2026 02:33:57 +0000 (0:00:09.510) 0:04:07.548 *****
2026-02-05 02:34:16.477625 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.477636 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:16.477647 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:16.477658 | orchestrator |
2026-02-05 02:34:16.477669 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-05 02:34:16.477680 | orchestrator | Thursday 05 February 2026 02:33:57 +0000 (0:00:00.578) 0:04:08.127 *****
2026-02-05 02:34:16.477721 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-05 02:34:16.477736 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-05 02:34:16.477749 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-05 02:34:16.477761 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-05 02:34:16.477772 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-05 02:34:16.477785 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__429501e51c7c2a653ccf472d3695011d7e2033d1'}])
2026-02-05 02:34:16.477797 | orchestrator |
2026-02-05 02:34:16.477809 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-05 02:34:16.477820 | orchestrator | Thursday 05 February 2026 02:34:12 +0000 (0:00:14.977) 0:04:23.105 *****
2026-02-05 02:34:16.477831 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.477842 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:16.477853 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:16.477864 | orchestrator |
2026-02-05 02:34:16.477874 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-05 02:34:16.477885 | orchestrator | Thursday 05 February 2026 02:34:13 +0000 (0:00:00.336) 0:04:23.441 *****
2026-02-05 02:34:16.477896 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:34:16.477907 | orchestrator |
2026-02-05 02:34:16.477918 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-05 02:34:16.477929 | orchestrator | Thursday 05 February 2026 02:34:14 +0000 (0:00:00.793) 0:04:24.235 *****
2026-02-05 02:34:16.477940 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:34:16.477951 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:34:16.477961 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:34:16.477972 | orchestrator |
2026-02-05 02:34:16.477983 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-05 02:34:16.477994 | orchestrator | Thursday 05 February 2026 02:34:14 +0000 (0:00:00.331) 0:04:24.566 *****
2026-02-05 02:34:16.478012 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.478079 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:16.478090 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:16.478101 | orchestrator |
2026-02-05 02:34:16.478118 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-05 02:34:16.478130 | orchestrator | Thursday 05 February 2026 02:34:14 +0000 (0:00:00.333) 0:04:24.900 *****
2026-02-05 02:34:16.478140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 02:34:16.478152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 02:34:16.478163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 02:34:16.478174 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:16.478185 | orchestrator |
2026-02-05 02:34:16.478196 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-05 02:34:16.478207 | orchestrator | Thursday 05 February 2026 02:34:15 +0000 (0:00:01.131) 0:04:26.031 *****
2026-02-05 02:34:16.478217 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:34:16.478228 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:34:16.478239 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:34:16.478250 | orchestrator |
2026-02-05 02:34:16.478261 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-05 02:34:16.478272 | orchestrator |
2026-02-05 02:34:16.478282 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 02:34:16.478301 | orchestrator | Thursday 05 February 2026 02:34:16 +0000 (0:00:00.611) 0:04:26.643 *****
2026-02-05 02:34:43.682179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:34:43.682298 | orchestrator |
2026-02-05 02:34:43.682316 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 02:34:43.682330 | orchestrator | Thursday 05 February 2026 02:34:17 +0000 (0:00:00.738) 0:04:27.382 *****
2026-02-05 02:34:43.682392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:34:43.682405 | orchestrator |
2026-02-05 02:34:43.682416 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 02:34:43.682427 | orchestrator | Thursday 05 February 2026 02:34:17 +0000 (0:00:00.580) 0:04:27.962 *****
2026-02-05 02:34:43.682438 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:34:43.682451 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:34:43.682462 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:34:43.682473 | orchestrator |
2026-02-05 02:34:43.682484 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 02:34:43.682495 | orchestrator | Thursday 05 February 2026 02:34:18 +0000 (0:00:00.723) 0:04:28.685 *****
2026-02-05 02:34:43.682506 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:43.682518 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:43.682529 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:43.682540 | orchestrator |
2026-02-05 02:34:43.682551 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 02:34:43.682562 | orchestrator | Thursday 05 February 2026 02:34:19 +0000 (0:00:00.546) 0:04:29.232 *****
2026-02-05 02:34:43.682573 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:34:43.682584 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:34:43.682595 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:34:43.682606 | orchestrator |
2026-02-05 02:34:43.682617 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 02:34:43.682628 | orchestrator | Thursday 05 February 2026 02:34:19 +0000 (0:00:00.321) 0:04:29.554 *****
2026-02-05 02:34:43.682639 |
orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.682650 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.682662 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.682675 | orchestrator | 2026-02-05 02:34:43.682688 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 02:34:43.682727 | orchestrator | Thursday 05 February 2026 02:34:19 +0000 (0:00:00.322) 0:04:29.877 ***** 2026-02-05 02:34:43.682741 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.682753 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.682766 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.682778 | orchestrator | 2026-02-05 02:34:43.682792 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 02:34:43.682806 | orchestrator | Thursday 05 February 2026 02:34:20 +0000 (0:00:00.694) 0:04:30.571 ***** 2026-02-05 02:34:43.682824 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.682842 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.682861 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.682880 | orchestrator | 2026-02-05 02:34:43.682898 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 02:34:43.682916 | orchestrator | Thursday 05 February 2026 02:34:20 +0000 (0:00:00.541) 0:04:31.113 ***** 2026-02-05 02:34:43.682935 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.682954 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.682973 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.682993 | orchestrator | 2026-02-05 02:34:43.683014 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 02:34:43.683035 | orchestrator | Thursday 05 February 2026 02:34:21 +0000 (0:00:00.325) 0:04:31.438 ***** 2026-02-05 02:34:43.683054 | orchestrator | ok: 
[testbed-node-0] 2026-02-05 02:34:43.683065 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.683076 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.683087 | orchestrator | 2026-02-05 02:34:43.683098 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 02:34:43.683109 | orchestrator | Thursday 05 February 2026 02:34:21 +0000 (0:00:00.731) 0:04:32.170 ***** 2026-02-05 02:34:43.683120 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.683131 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.683141 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.683152 | orchestrator | 2026-02-05 02:34:43.683163 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 02:34:43.683174 | orchestrator | Thursday 05 February 2026 02:34:22 +0000 (0:00:00.706) 0:04:32.877 ***** 2026-02-05 02:34:43.683185 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.683196 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.683207 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.683218 | orchestrator | 2026-02-05 02:34:43.683229 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 02:34:43.683254 | orchestrator | Thursday 05 February 2026 02:34:23 +0000 (0:00:00.600) 0:04:33.477 ***** 2026-02-05 02:34:43.683266 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.683277 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.683287 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.683298 | orchestrator | 2026-02-05 02:34:43.683309 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 02:34:43.683320 | orchestrator | Thursday 05 February 2026 02:34:23 +0000 (0:00:00.341) 0:04:33.818 ***** 2026-02-05 02:34:43.683330 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.683386 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.683398 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.683409 | orchestrator | 2026-02-05 02:34:43.683420 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 02:34:43.683431 | orchestrator | Thursday 05 February 2026 02:34:23 +0000 (0:00:00.314) 0:04:34.132 ***** 2026-02-05 02:34:43.683441 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.683452 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.683463 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.683474 | orchestrator | 2026-02-05 02:34:43.683485 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 02:34:43.683516 | orchestrator | Thursday 05 February 2026 02:34:24 +0000 (0:00:00.307) 0:04:34.440 ***** 2026-02-05 02:34:43.683539 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.683550 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.683561 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.683572 | orchestrator | 2026-02-05 02:34:43.683582 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 02:34:43.683593 | orchestrator | Thursday 05 February 2026 02:34:24 +0000 (0:00:00.605) 0:04:35.046 ***** 2026-02-05 02:34:43.683604 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.683615 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.683626 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.683636 | orchestrator | 2026-02-05 02:34:43.683647 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 02:34:43.683658 | orchestrator | Thursday 05 February 2026 02:34:25 +0000 (0:00:00.302) 0:04:35.348 ***** 2026-02-05 02:34:43.683668 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.683679 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.683690 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.683701 | orchestrator | 2026-02-05 02:34:43.683711 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 02:34:43.683722 | orchestrator | Thursday 05 February 2026 02:34:25 +0000 (0:00:00.342) 0:04:35.691 ***** 2026-02-05 02:34:43.683733 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.683744 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.683754 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.683765 | orchestrator | 2026-02-05 02:34:43.683776 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 02:34:43.683786 | orchestrator | Thursday 05 February 2026 02:34:25 +0000 (0:00:00.332) 0:04:36.024 ***** 2026-02-05 02:34:43.683797 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.683808 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.683819 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.683829 | orchestrator | 2026-02-05 02:34:43.683840 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 02:34:43.683851 | orchestrator | Thursday 05 February 2026 02:34:26 +0000 (0:00:00.553) 0:04:36.578 ***** 2026-02-05 02:34:43.683861 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.683872 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.683883 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.683893 | orchestrator | 2026-02-05 02:34:43.683904 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-05 02:34:43.683915 | orchestrator | Thursday 05 February 2026 02:34:26 +0000 (0:00:00.573) 0:04:37.151 ***** 2026-02-05 02:34:43.683926 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 02:34:43.683937 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 02:34:43.683949 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 02:34:43.683960 | orchestrator | 2026-02-05 02:34:43.683970 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-05 02:34:43.683981 | orchestrator | Thursday 05 February 2026 02:34:27 +0000 (0:00:00.883) 0:04:38.035 ***** 2026-02-05 02:34:43.683992 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:34:43.684003 | orchestrator | 2026-02-05 02:34:43.684014 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-05 02:34:43.684025 | orchestrator | Thursday 05 February 2026 02:34:28 +0000 (0:00:00.776) 0:04:38.811 ***** 2026-02-05 02:34:43.684035 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:34:43.684046 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:34:43.684057 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:34:43.684068 | orchestrator | 2026-02-05 02:34:43.684078 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-05 02:34:43.684089 | orchestrator | Thursday 05 February 2026 02:34:29 +0000 (0:00:00.681) 0:04:39.493 ***** 2026-02-05 02:34:43.684106 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:34:43.684117 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:34:43.684128 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:34:43.684139 | orchestrator | 2026-02-05 02:34:43.684150 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-05 02:34:43.684161 | orchestrator | Thursday 05 February 2026 02:34:29 +0000 (0:00:00.337) 0:04:39.831 ***** 2026-02-05 02:34:43.684172 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 
02:34:43.684183 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 02:34:43.684194 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 02:34:43.684205 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-05 02:34:43.684216 | orchestrator | 2026-02-05 02:34:43.684227 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-05 02:34:43.684243 | orchestrator | Thursday 05 February 2026 02:34:40 +0000 (0:00:11.092) 0:04:50.923 ***** 2026-02-05 02:34:43.684254 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:34:43.684265 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:34:43.684276 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:34:43.684287 | orchestrator | 2026-02-05 02:34:43.684297 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-05 02:34:43.684308 | orchestrator | Thursday 05 February 2026 02:34:41 +0000 (0:00:00.605) 0:04:51.528 ***** 2026-02-05 02:34:43.684319 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-05 02:34:43.684330 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 02:34:43.684357 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 02:34:43.684369 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-05 02:34:43.684380 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:34:43.684390 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:34:43.684401 | orchestrator | 2026-02-05 02:34:43.684412 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-05 02:34:43.684422 | orchestrator | Thursday 05 February 2026 02:34:43 +0000 (0:00:02.122) 0:04:53.651 ***** 2026-02-05 02:34:43.684433 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-05 02:34:43.684452 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 02:35:44.983639 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 02:35:44.983753 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 02:35:44.983770 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-05 02:35:44.983782 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-05 02:35:44.983793 | orchestrator | 2026-02-05 02:35:44.983805 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-05 02:35:44.983817 | orchestrator | Thursday 05 February 2026 02:34:44 +0000 (0:00:01.230) 0:04:54.882 ***** 2026-02-05 02:35:44.983828 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:35:44.983839 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:35:44.983850 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:35:44.983861 | orchestrator | 2026-02-05 02:35:44.983872 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-05 02:35:44.983884 | orchestrator | Thursday 05 February 2026 02:34:45 +0000 (0:00:00.715) 0:04:55.597 ***** 2026-02-05 02:35:44.983895 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:35:44.983906 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:35:44.983917 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:35:44.983928 | orchestrator | 2026-02-05 02:35:44.983939 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-05 02:35:44.983950 | orchestrator | Thursday 05 February 2026 02:34:46 +0000 (0:00:00.594) 0:04:56.192 ***** 2026-02-05 02:35:44.983961 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:35:44.983972 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:35:44.983983 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:35:44.984015 | orchestrator | 2026-02-05 02:35:44.984027 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-02-05 02:35:44.984038 | orchestrator | Thursday 05 February 2026 02:34:46 +0000 (0:00:00.329) 0:04:56.521 ***** 2026-02-05 02:35:44.984049 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:35:44.984060 | orchestrator | 2026-02-05 02:35:44.984071 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-05 02:35:44.984083 | orchestrator | Thursday 05 February 2026 02:34:46 +0000 (0:00:00.570) 0:04:57.091 ***** 2026-02-05 02:35:44.984094 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:35:44.984105 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:35:44.984116 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:35:44.984127 | orchestrator | 2026-02-05 02:35:44.984138 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-05 02:35:44.984148 | orchestrator | Thursday 05 February 2026 02:34:47 +0000 (0:00:00.604) 0:04:57.696 ***** 2026-02-05 02:35:44.984159 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:35:44.984170 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:35:44.984180 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:35:44.984192 | orchestrator | 2026-02-05 02:35:44.984205 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-05 02:35:44.984218 | orchestrator | Thursday 05 February 2026 02:34:47 +0000 (0:00:00.356) 0:04:58.053 ***** 2026-02-05 02:35:44.984230 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:35:44.984244 | orchestrator | 2026-02-05 02:35:44.984256 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-05 02:35:44.984269 | orchestrator | Thursday 05 February 2026 
02:34:48 +0000 (0:00:00.577) 0:04:58.630 ***** 2026-02-05 02:35:44.984282 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:35:44.984295 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:35:44.984307 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:35:44.984320 | orchestrator | 2026-02-05 02:35:44.984332 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-05 02:35:44.984345 | orchestrator | Thursday 05 February 2026 02:34:49 +0000 (0:00:01.424) 0:05:00.055 ***** 2026-02-05 02:35:44.984357 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:35:44.984370 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:35:44.984382 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:35:44.984394 | orchestrator | 2026-02-05 02:35:44.984425 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-05 02:35:44.984438 | orchestrator | Thursday 05 February 2026 02:34:51 +0000 (0:00:01.164) 0:05:01.220 ***** 2026-02-05 02:35:44.984450 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:35:44.984463 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:35:44.984475 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:35:44.984488 | orchestrator | 2026-02-05 02:35:44.984501 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-05 02:35:44.984514 | orchestrator | Thursday 05 February 2026 02:34:52 +0000 (0:00:01.733) 0:05:02.954 ***** 2026-02-05 02:35:44.984527 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:35:44.984553 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:35:44.984564 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:35:44.984575 | orchestrator | 2026-02-05 02:35:44.984586 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-05 02:35:44.984597 | orchestrator | Thursday 05 February 2026 02:34:54 +0000 
(0:00:01.967) 0:05:04.921 ***** 2026-02-05 02:35:44.984608 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:35:44.984618 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:35:44.984629 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-05 02:35:44.984640 | orchestrator | 2026-02-05 02:35:44.984651 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-05 02:35:44.984670 | orchestrator | Thursday 05 February 2026 02:34:55 +0000 (0:00:00.647) 0:05:05.569 ***** 2026-02-05 02:35:44.984681 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-05 02:35:44.984692 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-05 02:35:44.984704 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-05 02:35:44.984730 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-05 02:35:44.984742 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-05 02:35:44.984753 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-05 02:35:44.984764 | orchestrator | 2026-02-05 02:35:44.984775 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-05 02:35:44.984786 | orchestrator | Thursday 05 February 2026 02:35:25 +0000 (0:00:30.307) 0:05:35.876 ***** 2026-02-05 02:35:44.984797 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-05 02:35:44.984808 | orchestrator | 2026-02-05 02:35:44.984819 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-05 02:35:44.984830 | orchestrator | Thursday 05 February 2026 02:35:27 +0000 (0:00:01.501) 0:05:37.378 ***** 2026-02-05 02:35:44.984841 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:35:44.984852 | orchestrator | 2026-02-05 02:35:44.984862 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-05 02:35:44.984873 | orchestrator | Thursday 05 February 2026 02:35:27 +0000 (0:00:00.285) 0:05:37.663 ***** 2026-02-05 02:35:44.984884 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:35:44.984895 | orchestrator | 2026-02-05 02:35:44.984905 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-05 02:35:44.984916 | orchestrator | Thursday 05 February 2026 02:35:27 +0000 (0:00:00.171) 0:05:37.835 ***** 2026-02-05 02:35:44.984927 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-05 02:35:44.984938 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-05 02:35:44.984948 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-05 02:35:44.984959 | orchestrator | 2026-02-05 02:35:44.984970 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-05 02:35:44.984981 | orchestrator | Thursday 05 February 2026 02:35:34 +0000 (0:00:06.444) 0:05:44.280 ***** 2026-02-05 02:35:44.984992 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-05 02:35:44.985003 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-05 02:35:44.985014 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-05 02:35:44.985025 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-05 02:35:44.985036 | orchestrator | 2026-02-05 02:35:44.985047 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 02:35:44.985058 | orchestrator | Thursday 05 February 2026 02:35:39 +0000 (0:00:05.166) 0:05:49.446 ***** 2026-02-05 02:35:44.985069 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:35:44.985080 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:35:44.985091 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:35:44.985101 | orchestrator | 2026-02-05 02:35:44.985112 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-05 02:35:44.985123 | orchestrator | Thursday 05 February 2026 02:35:39 +0000 (0:00:00.666) 0:05:50.112 ***** 2026-02-05 02:35:44.985134 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:35:44.985145 | orchestrator | 2026-02-05 02:35:44.985163 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-05 02:35:44.985174 | orchestrator | Thursday 05 February 2026 02:35:40 +0000 (0:00:00.530) 0:05:50.642 ***** 2026-02-05 02:35:44.985184 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:35:44.985195 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:35:44.985206 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 02:35:44.985217 | orchestrator | 2026-02-05 02:35:44.985228 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-05 02:35:44.985239 | orchestrator | Thursday 05 February 2026 02:35:41 +0000 (0:00:00.570) 0:05:51.213 ***** 2026-02-05 02:35:44.985250 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:35:44.985261 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:35:44.985271 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:35:44.985282 | orchestrator | 2026-02-05 02:35:44.985293 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-05 02:35:44.985304 | orchestrator | Thursday 05 February 2026 02:35:42 +0000 (0:00:01.180) 0:05:52.393 ***** 2026-02-05 02:35:44.985315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 02:35:44.985326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 02:35:44.985342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 02:35:44.985353 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:35:44.985364 | orchestrator | 2026-02-05 02:35:44.985375 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-05 02:35:44.985386 | orchestrator | Thursday 05 February 2026 02:35:42 +0000 (0:00:00.623) 0:05:53.017 ***** 2026-02-05 02:35:44.985397 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:35:44.985492 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:35:44.985513 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:35:44.985524 | orchestrator | 2026-02-05 02:35:44.985535 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-05 02:35:44.985546 | orchestrator | 2026-02-05 02:35:44.985557 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 
02:35:44.985569 | orchestrator | Thursday 05 February 2026 02:35:43 +0000 (0:00:00.831) 0:05:53.849 *****
2026-02-05 02:35:44.985580 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:35:44.985592 | orchestrator |
2026-02-05 02:35:44.985603 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 02:35:44.985614 | orchestrator | Thursday 05 February 2026 02:35:44 +0000 (0:00:00.550) 0:05:54.399 *****
2026-02-05 02:35:44.985633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:36:00.810224 | orchestrator |
2026-02-05 02:36:00.810363 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 02:36:00.810389 | orchestrator | Thursday 05 February 2026 02:35:44 +0000 (0:00:00.754) 0:05:55.154 *****
2026-02-05 02:36:00.810408 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.810474 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.810494 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.810515 | orchestrator |
2026-02-05 02:36:00.810535 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 02:36:00.810555 | orchestrator | Thursday 05 February 2026 02:35:45 +0000 (0:00:00.336) 0:05:55.490 *****
2026-02-05 02:36:00.810575 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.810597 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.810617 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.810637 | orchestrator |
2026-02-05 02:36:00.810657 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 02:36:00.810679 | orchestrator | Thursday 05 February 2026 02:35:45 +0000 (0:00:00.649) 0:05:56.140 *****
2026-02-05 02:36:00.810700 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.810720 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.810776 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.810795 | orchestrator |
2026-02-05 02:36:00.810812 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 02:36:00.810832 | orchestrator | Thursday 05 February 2026 02:35:46 +0000 (0:00:00.671) 0:05:56.811 *****
2026-02-05 02:36:00.810852 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.810872 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.810889 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.810907 | orchestrator |
2026-02-05 02:36:00.810925 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 02:36:00.810943 | orchestrator | Thursday 05 February 2026 02:35:47 +0000 (0:00:00.940) 0:05:57.752 *****
2026-02-05 02:36:00.810962 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.810980 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.811000 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.811019 | orchestrator |
2026-02-05 02:36:00.811036 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 02:36:00.811055 | orchestrator | Thursday 05 February 2026 02:35:47 +0000 (0:00:00.355) 0:05:58.108 *****
2026-02-05 02:36:00.811095 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.811116 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.811135 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.811153 | orchestrator |
2026-02-05 02:36:00.811172 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 02:36:00.811192 | orchestrator | Thursday 05 February 2026 02:35:48 +0000 (0:00:00.311) 0:05:58.420 *****
2026-02-05 02:36:00.811210 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.811229 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.811247 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.811265 | orchestrator |
2026-02-05 02:36:00.811283 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 02:36:00.811301 | orchestrator | Thursday 05 February 2026 02:35:48 +0000 (0:00:00.329) 0:05:58.749 *****
2026-02-05 02:36:00.811319 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.811337 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.811355 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.811373 | orchestrator |
2026-02-05 02:36:00.811391 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 02:36:00.811409 | orchestrator | Thursday 05 February 2026 02:35:49 +0000 (0:00:00.966) 0:05:59.715 *****
2026-02-05 02:36:00.811450 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.811470 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.811489 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.811509 | orchestrator |
2026-02-05 02:36:00.811527 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 02:36:00.811547 | orchestrator | Thursday 05 February 2026 02:35:50 +0000 (0:00:00.699) 0:06:00.415 *****
2026-02-05 02:36:00.811566 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.811586 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.811606 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.811625 | orchestrator |
2026-02-05 02:36:00.811644 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 02:36:00.811664 | orchestrator | Thursday 05 February 2026 02:35:50 +0000 (0:00:00.338) 0:06:00.753 *****
2026-02-05 02:36:00.811684 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.811703 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.811723 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.811742 | orchestrator |
2026-02-05 02:36:00.811762 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 02:36:00.811802 | orchestrator | Thursday 05 February 2026 02:35:50 +0000 (0:00:00.321) 0:06:01.075 *****
2026-02-05 02:36:00.811823 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.811841 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.811861 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.811880 | orchestrator |
2026-02-05 02:36:00.811899 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 02:36:00.811932 | orchestrator | Thursday 05 February 2026 02:35:51 +0000 (0:00:00.637) 0:06:01.712 *****
2026-02-05 02:36:00.811949 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.811966 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.811984 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.812002 | orchestrator |
2026-02-05 02:36:00.812021 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 02:36:00.812039 | orchestrator | Thursday 05 February 2026 02:35:51 +0000 (0:00:00.351) 0:06:02.064 *****
2026-02-05 02:36:00.812058 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.812075 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.812094 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.812114 | orchestrator |
2026-02-05 02:36:00.812133 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 02:36:00.812152 | orchestrator | Thursday 05 February 2026 02:35:52 +0000 (0:00:00.366) 0:06:02.430 *****
2026-02-05 02:36:00.812163 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.812175 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.812186 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.812197 | orchestrator |
2026-02-05 02:36:00.812208 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 02:36:00.812244 | orchestrator | Thursday 05 February 2026 02:35:52 +0000 (0:00:00.301) 0:06:02.732 *****
2026-02-05 02:36:00.812256 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.812267 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.812278 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.812289 | orchestrator |
2026-02-05 02:36:00.812300 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 02:36:00.812311 | orchestrator | Thursday 05 February 2026 02:35:53 +0000 (0:00:00.544) 0:06:03.277 *****
2026-02-05 02:36:00.812322 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.812332 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.812343 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.812354 | orchestrator |
2026-02-05 02:36:00.812365 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 02:36:00.812377 | orchestrator | Thursday 05 February 2026 02:35:53 +0000 (0:00:00.315) 0:06:03.593 *****
2026-02-05 02:36:00.812396 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.812414 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.812506 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.812525 | orchestrator |
2026-02-05 02:36:00.812543 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 02:36:00.812563 | orchestrator | Thursday 05 February 2026 02:35:53 +0000 (0:00:00.348) 0:06:03.941 *****
2026-02-05 02:36:00.812580 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.812599 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.812618 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.812637 | orchestrator |
2026-02-05 02:36:00.812655 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-05 02:36:00.812671 | orchestrator | Thursday 05 February 2026 02:35:54 +0000 (0:00:00.770) 0:06:04.712 *****
2026-02-05 02:36:00.812686 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.812696 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.812705 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.812715 | orchestrator |
2026-02-05 02:36:00.812724 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-05 02:36:00.812734 | orchestrator | Thursday 05 February 2026 02:35:54 +0000 (0:00:00.330) 0:06:05.043 *****
2026-02-05 02:36:00.812744 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 02:36:00.812754 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 02:36:00.812764 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 02:36:00.812774 | orchestrator |
2026-02-05 02:36:00.812795 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-05 02:36:00.812805 | orchestrator | Thursday 05 February 2026 02:35:55 +0000 (0:00:00.591) 0:06:05.634 *****
2026-02-05 02:36:00.812816 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:36:00.812833 | orchestrator |
2026-02-05 02:36:00.812850 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-05 02:36:00.812866 | orchestrator | Thursday 05 February 2026 02:35:56 +0000 (0:00:00.648) 0:06:06.282 *****
2026-02-05 02:36:00.812881 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.812896 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.812912 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.812929 | orchestrator |
2026-02-05 02:36:00.812946 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-05 02:36:00.812962 | orchestrator | Thursday 05 February 2026 02:35:56 +0000 (0:00:00.286) 0:06:06.568 *****
2026-02-05 02:36:00.812977 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:36:00.812987 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:36:00.812996 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:36:00.813006 | orchestrator |
2026-02-05 02:36:00.813015 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-05 02:36:00.813025 | orchestrator | Thursday 05 February 2026 02:35:56 +0000 (0:00:00.273) 0:06:06.842 *****
2026-02-05 02:36:00.813035 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.813044 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.813054 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.813064 | orchestrator |
2026-02-05 02:36:00.813073 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-05 02:36:00.813083 | orchestrator | Thursday 05 February 2026 02:35:57 +0000 (0:00:00.592) 0:06:07.434 *****
2026-02-05 02:36:00.813093 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:36:00.813102 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:36:00.813112 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:36:00.813121 | orchestrator |
2026-02-05 02:36:00.813139 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-05 02:36:00.813149 | orchestrator | Thursday 05 February 2026 02:35:57 +0000 (0:00:00.445) 0:06:07.880 *****
2026-02-05 02:36:00.813159 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-05 02:36:00.813170 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-05 02:36:00.813180 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-05 02:36:00.813190 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-05 02:36:00.813199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-05 02:36:00.813209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-05 02:36:00.813218 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-05 02:36:00.813228 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-05 02:36:00.813238 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-05 02:36:00.813258 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-05 02:37:07.671567 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-05 02:37:07.671711 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-05 02:37:07.671738 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-05 02:37:07.671759 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-05 02:37:07.671814 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-05 02:37:07.671837 | orchestrator |
2026-02-05 02:37:07.671859 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-05 02:37:07.671881 | orchestrator | Thursday 05 February 2026 02:36:00 +0000 (0:00:03.090) 0:06:10.971 *****
2026-02-05 02:37:07.671902 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:07.671923 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:07.671943 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:07.671962 | orchestrator |
2026-02-05 02:37:07.671983 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-05 02:37:07.672003 | orchestrator | Thursday 05 February 2026 02:36:01 +0000 (0:00:00.307) 0:06:11.278 *****
2026-02-05 02:37:07.672024 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:37:07.672046 | orchestrator |
2026-02-05 02:37:07.672068 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-05 02:37:07.672090 | orchestrator | Thursday 05 February 2026 02:36:01 +0000 (0:00:00.742) 0:06:12.021 *****
2026-02-05 02:37:07.672110 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-05 02:37:07.672127 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-05 02:37:07.672141 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-05 02:37:07.672156 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-05 02:37:07.672168 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-05 02:37:07.672180 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-05 02:37:07.672191 | orchestrator |
2026-02-05 02:37:07.672202 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-05 02:37:07.672213 | orchestrator | Thursday 05 February 2026 02:36:02 +0000 (0:00:01.092) 0:06:13.114 *****
2026-02-05 02:37:07.672223 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 02:37:07.672234 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-05 02:37:07.672245 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-05 02:37:07.672256 | orchestrator |
2026-02-05 02:37:07.672267 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-05 02:37:07.672278 | orchestrator | Thursday 05 February 2026 02:36:05 +0000 (0:00:02.323) 0:06:15.437 *****
2026-02-05 02:37:07.672289 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-05 02:37:07.672300 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-05 02:37:07.672311 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:37:07.672322 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-05 02:37:07.672333 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-05 02:37:07.672343 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:37:07.672354 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-05 02:37:07.672365 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-05 02:37:07.672376 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:37:07.672387 | orchestrator |
2026-02-05 02:37:07.672398 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-05 02:37:07.672409 | orchestrator | Thursday 05 February 2026 02:36:06 +0000 (0:00:01.089) 0:06:16.527 *****
2026-02-05 02:37:07.672419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:37:07.672431 | orchestrator |
2026-02-05 02:37:07.672442 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-05 02:37:07.672453 | orchestrator | Thursday 05 February 2026 02:36:08 +0000 (0:00:02.131) 0:06:18.659 *****
2026-02-05 02:37:07.672478 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:37:07.672523 | orchestrator |
2026-02-05 02:37:07.672542 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-05 02:37:07.672574 | orchestrator | Thursday 05 February 2026 02:36:09 +0000 (0:00:00.706) 0:06:19.365 *****
2026-02-05 02:37:07.672587 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})
2026-02-05 02:37:07.672599 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})
2026-02-05 02:37:07.672610 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})
2026-02-05 02:37:07.672621 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})
2026-02-05 02:37:07.672632 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})
2026-02-05 02:37:07.672666 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})
2026-02-05 02:37:07.672678 | orchestrator |
2026-02-05 02:37:07.672689 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-05 02:37:07.672699 | orchestrator | Thursday 05 February 2026 02:36:50 +0000 (0:00:41.404) 0:07:00.770 *****
2026-02-05 02:37:07.672710 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:07.672721 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:07.672732 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:07.672743 | orchestrator |
2026-02-05 02:37:07.672754 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-05 02:37:07.672764 | orchestrator | Thursday 05 February 2026 02:36:50 +0000 (0:00:00.294) 0:07:01.065 *****
2026-02-05 02:37:07.672778 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:37:07.672797 | orchestrator |
2026-02-05 02:37:07.672816 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-05 02:37:07.672832 | orchestrator | Thursday 05 February 2026 02:36:51 +0000 (0:00:00.633) 0:07:01.698 *****
2026-02-05 02:37:07.672850 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:37:07.672896 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:37:07.672916 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:37:07.672934 | orchestrator |
2026-02-05 02:37:07.672954 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-05 02:37:07.672966 | orchestrator | Thursday 05 February 2026 02:36:52 +0000 (0:00:00.627) 0:07:02.326 *****
2026-02-05 02:37:07.672978 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:37:07.672989 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:37:07.673000 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:37:07.673011 | orchestrator |
2026-02-05 02:37:07.673022 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-05 02:37:07.673032 | orchestrator | Thursday 05 February 2026 02:36:54 +0000 (0:00:02.477) 0:07:04.803 *****
2026-02-05 02:37:07.673043 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:37:07.673054 | orchestrator |
2026-02-05 02:37:07.673071 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-05 02:37:07.673090 | orchestrator | Thursday 05 February 2026 02:36:55 +0000 (0:00:00.743) 0:07:05.547 *****
2026-02-05 02:37:07.673107 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:37:07.673125 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:37:07.673143 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:37:07.673161 | orchestrator |
2026-02-05 02:37:07.673179 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-05 02:37:07.673220 | orchestrator | Thursday 05 February 2026 02:36:56 +0000 (0:00:01.177) 0:07:06.725 *****
2026-02-05 02:37:07.673244 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:37:07.673255 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:37:07.673266 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:37:07.673277 | orchestrator |
2026-02-05 02:37:07.673288 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-05 02:37:07.673299 | orchestrator | Thursday 05 February 2026 02:36:57 +0000 (0:00:01.189) 0:07:07.914 *****
2026-02-05 02:37:07.673310 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:37:07.673321 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:37:07.673332 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:37:07.673342 | orchestrator |
2026-02-05 02:37:07.673353 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-05 02:37:07.673364 | orchestrator | Thursday 05 February 2026 02:36:59 +0000 (0:00:01.979) 0:07:09.894 *****
2026-02-05 02:37:07.673375 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:07.673386 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:07.673397 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:07.673407 | orchestrator |
2026-02-05 02:37:07.673418 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-05 02:37:07.673429 | orchestrator | Thursday 05 February 2026 02:37:00 +0000 (0:00:00.353) 0:07:10.247 *****
2026-02-05 02:37:07.673440 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:07.673451 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:07.673462 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:07.673472 | orchestrator |
2026-02-05 02:37:07.673483 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-05 02:37:07.673518 | orchestrator | Thursday 05 February 2026 02:37:00 +0000 (0:00:00.334) 0:07:10.582 *****
2026-02-05 02:37:07.673529 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-02-05 02:37:07.673549 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-02-05 02:37:07.673560 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-05 02:37:07.673571 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-02-05 02:37:07.673582 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-02-05 02:37:07.673593 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-02-05 02:37:07.673603 | orchestrator |
2026-02-05 02:37:07.673614 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-05 02:37:07.673625 | orchestrator | Thursday 05 February 2026 02:37:01 +0000 (0:00:01.015) 0:07:11.597 *****
2026-02-05 02:37:07.673636 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-05 02:37:07.673647 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-05 02:37:07.673657 | orchestrator | changed: [testbed-node-5] => (item=0)
2026-02-05 02:37:07.673668 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-05 02:37:07.673687 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-02-05 02:37:07.673710 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-02-05 02:37:07.673737 | orchestrator |
2026-02-05 02:37:07.673757 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-05 02:37:07.673775 | orchestrator | Thursday 05 February 2026 02:37:03 +0000 (0:00:02.473) 0:07:14.071 *****
2026-02-05 02:37:07.673794 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-05 02:37:07.673811 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-05 02:37:07.673831 | orchestrator | changed: [testbed-node-5] => (item=0)
2026-02-05 02:37:07.673850 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-02-05 02:37:07.673887 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-02-05 02:37:36.840623 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-05 02:37:36.840732 | orchestrator |
2026-02-05 02:37:36.840750 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-05 02:37:36.840763 | orchestrator | Thursday 05 February 2026 02:37:07 +0000 (0:00:03.766) 0:07:17.838 *****
2026-02-05 02:37:36.840779 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.840799 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.840860 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:37:36.840882 | orchestrator |
2026-02-05 02:37:36.840902 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-05 02:37:36.840919 | orchestrator | Thursday 05 February 2026 02:37:10 +0000 (0:00:02.723) 0:07:20.561 *****
2026-02-05 02:37:36.840936 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.840953 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.840972 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-05 02:37:36.840991 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:37:36.841010 | orchestrator |
2026-02-05 02:37:36.841028 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-05 02:37:36.841046 | orchestrator | Thursday 05 February 2026 02:37:22 +0000 (0:00:12.591) 0:07:33.153 *****
2026-02-05 02:37:36.841064 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841082 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.841099 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.841117 | orchestrator |
2026-02-05 02:37:36.841137 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-05 02:37:36.841155 | orchestrator | Thursday 05 February 2026 02:37:23 +0000 (0:00:00.963) 0:07:34.116 *****
2026-02-05 02:37:36.841174 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841192 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.841210 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.841227 | orchestrator |
2026-02-05 02:37:36.841245 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-05 02:37:36.841265 | orchestrator | Thursday 05 February 2026 02:37:24 +0000 (0:00:00.288) 0:07:34.404 *****
2026-02-05 02:37:36.841283 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:37:36.841301 | orchestrator |
2026-02-05 02:37:36.841318 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-05 02:37:36.841340 | orchestrator | Thursday 05 February 2026 02:37:24 +0000 (0:00:00.680) 0:07:35.085 *****
2026-02-05 02:37:36.841358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:37:36.841376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:37:36.841396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:37:36.841415 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841434 | orchestrator |
2026-02-05 02:37:36.841454 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-05 02:37:36.841474 | orchestrator | Thursday 05 February 2026 02:37:25 +0000 (0:00:00.385) 0:07:35.471 *****
2026-02-05 02:37:36.841491 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841510 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.841562 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.841580 | orchestrator |
2026-02-05 02:37:36.841598 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-05 02:37:36.841616 | orchestrator | Thursday 05 February 2026 02:37:25 +0000 (0:00:00.290) 0:07:35.761 *****
2026-02-05 02:37:36.841635 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841653 | orchestrator |
2026-02-05 02:37:36.841671 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-05 02:37:36.841689 | orchestrator | Thursday 05 February 2026 02:37:25 +0000 (0:00:00.199) 0:07:35.961 *****
2026-02-05 02:37:36.841707 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841726 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.841745 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.841763 | orchestrator |
2026-02-05 02:37:36.841782 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-05 02:37:36.841800 | orchestrator | Thursday 05 February 2026 02:37:26 +0000 (0:00:00.438) 0:07:36.399 *****
2026-02-05 02:37:36.841855 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841874 | orchestrator |
2026-02-05 02:37:36.841911 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-05 02:37:36.841932 | orchestrator | Thursday 05 February 2026 02:37:26 +0000 (0:00:00.224) 0:07:36.623 *****
2026-02-05 02:37:36.841950 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.841968 | orchestrator |
2026-02-05 02:37:36.841986 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-05 02:37:36.842005 | orchestrator | Thursday 05 February 2026 02:37:26 +0000 (0:00:00.213) 0:07:36.837 *****
2026-02-05 02:37:36.842108 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842127 | orchestrator |
2026-02-05 02:37:36.842143 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-05 02:37:36.842160 | orchestrator | Thursday 05 February 2026 02:37:26 +0000 (0:00:00.118) 0:07:36.955 *****
2026-02-05 02:37:36.842177 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842195 | orchestrator |
2026-02-05 02:37:36.842213 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-05 02:37:36.842231 | orchestrator | Thursday 05 February 2026 02:37:26 +0000 (0:00:00.210) 0:07:37.165 *****
2026-02-05 02:37:36.842249 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842266 | orchestrator |
2026-02-05 02:37:36.842286 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-05 02:37:36.842305 | orchestrator | Thursday 05 February 2026 02:37:27 +0000 (0:00:00.214) 0:07:37.380 *****
2026-02-05 02:37:36.842323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:37:36.842343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:37:36.842394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:37:36.842414 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842435 | orchestrator |
2026-02-05 02:37:36.842453 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-05 02:37:36.842474 | orchestrator | Thursday 05 February 2026 02:37:27 +0000 (0:00:00.365) 0:07:37.745 *****
2026-02-05 02:37:36.842492 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842510 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.842559 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.842577 | orchestrator |
2026-02-05 02:37:36.842596 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-05 02:37:36.842608 | orchestrator | Thursday 05 February 2026 02:37:27 +0000 (0:00:00.277) 0:07:38.022 *****
2026-02-05 02:37:36.842618 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842629 | orchestrator |
2026-02-05 02:37:36.842640 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-05 02:37:36.842650 | orchestrator | Thursday 05 February 2026 02:37:28 +0000 (0:00:00.202) 0:07:38.225 *****
2026-02-05 02:37:36.842661 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842672 | orchestrator |
2026-02-05 02:37:36.842682 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-05 02:37:36.842693 | orchestrator |
2026-02-05 02:37:36.842704 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 02:37:36.842714 | orchestrator | Thursday 05 February 2026 02:37:29 +0000 (0:00:01.065) 0:07:39.290 *****
2026-02-05 02:37:36.842727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:37:36.842746 | orchestrator |
2026-02-05 02:37:36.842771 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 02:37:36.842793 | orchestrator | Thursday 05 February 2026 02:37:30 +0000 (0:00:01.063) 0:07:40.354 *****
2026-02-05 02:37:36.842810 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:37:36.842852 | orchestrator |
2026-02-05 02:37:36.842872 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 02:37:36.842890 | orchestrator | Thursday 05 February 2026 02:37:31 +0000 (0:00:01.074) 0:07:41.428 *****
2026-02-05 02:37:36.842909 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.842921 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.842932 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.842943 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:37:36.842955 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:37:36.842965 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:37:36.842976 | orchestrator |
2026-02-05 02:37:36.842987 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 02:37:36.842998 | orchestrator | Thursday 05 February 2026 02:37:32 +0000 (0:00:01.100) 0:07:42.528 *****
2026-02-05 02:37:36.843009 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:37:36.843020 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:37:36.843031 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:37:36.843042 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:37:36.843052 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:37:36.843063 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:37:36.843073 | orchestrator |
2026-02-05 02:37:36.843084 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 02:37:36.843095 | orchestrator | Thursday 05 February 2026 02:37:33 +0000 (0:00:00.690) 0:07:43.219 *****
2026-02-05 02:37:36.843106 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:37:36.843117 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:37:36.843128 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:37:36.843138 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:37:36.843149 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:37:36.843160 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:37:36.843170 | orchestrator |
2026-02-05 02:37:36.843181 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 02:37:36.843192 | orchestrator | Thursday 05 February 2026 02:37:33 +0000 (0:00:00.763) 0:07:43.982 *****
2026-02-05 02:37:36.843203 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:37:36.843214 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:37:36.843225 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:37:36.843236 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:37:36.843246 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:37:36.843257 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:37:36.843268 | orchestrator |
2026-02-05 02:37:36.843288 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 02:37:36.843299 | orchestrator | Thursday 05 February 2026 02:37:34 +0000 (0:00:00.671) 0:07:44.654 *****
2026-02-05 02:37:36.843310 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:37:36.843321 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:37:36.843332 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:37:36.843342 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:37:36.843353 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:37:36.843364 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:37:36.843374 | orchestrator |
2026-02-05 02:37:36.843385 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-02-05 02:37:36.843396 | orchestrator | Thursday 05 February 2026 02:37:35 +0000 (0:00:01.142) 0:07:45.796 ***** 2026-02-05 02:37:36.843407 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:37:36.843418 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:37:36.843428 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:37:36.843439 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:37:36.843450 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:37:36.843461 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:37:36.843472 | orchestrator | 2026-02-05 02:37:36.843482 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 02:37:36.843493 | orchestrator | Thursday 05 February 2026 02:37:36 +0000 (0:00:00.545) 0:07:46.342 ***** 2026-02-05 02:37:36.843504 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:37:36.843548 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:37:36.843560 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:37:36.843571 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:37:36.843594 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.327253 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.327344 | orchestrator | 2026-02-05 02:38:08.327354 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 02:38:08.327362 | orchestrator | Thursday 05 February 2026 02:37:36 +0000 (0:00:00.669) 0:07:47.011 ***** 2026-02-05 02:38:08.327369 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:38:08.327376 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:38:08.327382 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:38:08.327388 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:38:08.327394 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:38:08.327400 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:38:08.327406 | orchestrator 
| 2026-02-05 02:38:08.327412 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 02:38:08.327418 | orchestrator | Thursday 05 February 2026 02:37:38 +0000 (0:00:01.868) 0:07:48.880 ***** 2026-02-05 02:38:08.327424 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:38:08.327430 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:38:08.327435 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:38:08.327490 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:38:08.327497 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:38:08.327503 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:38:08.327508 | orchestrator | 2026-02-05 02:38:08.327515 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 02:38:08.327520 | orchestrator | Thursday 05 February 2026 02:37:39 +0000 (0:00:01.114) 0:07:49.994 ***** 2026-02-05 02:38:08.327526 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:38:08.327533 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:38:08.327539 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:38:08.327545 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:38:08.327551 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.327557 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.327563 | orchestrator | 2026-02-05 02:38:08.327569 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 02:38:08.327575 | orchestrator | Thursday 05 February 2026 02:37:40 +0000 (0:00:00.568) 0:07:50.563 ***** 2026-02-05 02:38:08.327581 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:38:08.327587 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:38:08.327593 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:38:08.327599 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:38:08.327604 | orchestrator | ok: [testbed-node-1] 2026-02-05 
02:38:08.327610 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:38:08.327616 | orchestrator | 2026-02-05 02:38:08.327622 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 02:38:08.327628 | orchestrator | Thursday 05 February 2026 02:37:41 +0000 (0:00:00.747) 0:07:51.310 ***** 2026-02-05 02:38:08.327634 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:38:08.327640 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:38:08.327646 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:38:08.327652 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:38:08.327658 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.327663 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.327669 | orchestrator | 2026-02-05 02:38:08.327675 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 02:38:08.327681 | orchestrator | Thursday 05 February 2026 02:37:41 +0000 (0:00:00.540) 0:07:51.851 ***** 2026-02-05 02:38:08.327687 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:38:08.327693 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:38:08.327699 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:38:08.327705 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:38:08.327711 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.327736 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.327742 | orchestrator | 2026-02-05 02:38:08.327748 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 02:38:08.327758 | orchestrator | Thursday 05 February 2026 02:37:42 +0000 (0:00:00.681) 0:07:52.532 ***** 2026-02-05 02:38:08.327771 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:38:08.327785 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:38:08.327795 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:38:08.327805 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 02:38:08.327814 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.327824 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.327834 | orchestrator | 2026-02-05 02:38:08.327844 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 02:38:08.327854 | orchestrator | Thursday 05 February 2026 02:37:42 +0000 (0:00:00.523) 0:07:53.056 ***** 2026-02-05 02:38:08.327863 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:38:08.327874 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:38:08.327884 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:38:08.327894 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:38:08.327905 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.327916 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.327926 | orchestrator | 2026-02-05 02:38:08.327935 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 02:38:08.327943 | orchestrator | Thursday 05 February 2026 02:37:43 +0000 (0:00:00.769) 0:07:53.826 ***** 2026-02-05 02:38:08.327953 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:38:08.327962 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:38:08.327972 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:38:08.327982 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:38:08.327991 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:38:08.328002 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:38:08.328012 | orchestrator | 2026-02-05 02:38:08.328022 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 02:38:08.328033 | orchestrator | Thursday 05 February 2026 02:37:44 +0000 (0:00:00.520) 0:07:54.346 ***** 2026-02-05 02:38:08.328044 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:38:08.328054 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 02:38:08.328064 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:08.328071 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:38:08.328078 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:38:08.328085 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:38:08.328092 | orchestrator |
2026-02-05 02:38:08.328099 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 02:38:08.328107 | orchestrator | Thursday 05 February 2026 02:37:44 +0000 (0:00:00.697) 0:07:55.044 *****
2026-02-05 02:38:08.328113 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:08.328120 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:08.328127 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:08.328133 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:38:08.328152 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:38:08.328160 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:38:08.328167 | orchestrator |
2026-02-05 02:38:08.328173 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 02:38:08.328181 | orchestrator | Thursday 05 February 2026 02:37:45 +0000 (0:00:00.566) 0:07:55.611 *****
2026-02-05 02:38:08.328187 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:08.328193 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:08.328198 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:08.328235 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:38:08.328242 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:38:08.328248 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:38:08.328254 | orchestrator |
2026-02-05 02:38:08.328260 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-05 02:38:08.328266 | orchestrator | Thursday 05 February 2026 02:37:46 +0000 (0:00:01.239) 0:07:56.850 *****
2026-02-05 02:38:08.328280 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:38:08.328286 | orchestrator |
2026-02-05 02:38:08.328292 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-05 02:38:08.328298 | orchestrator | Thursday 05 February 2026 02:37:50 +0000 (0:00:04.254) 0:08:01.105 *****
2026-02-05 02:38:08.328304 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:38:08.328310 | orchestrator |
2026-02-05 02:38:08.328316 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-05 02:38:08.328321 | orchestrator | Thursday 05 February 2026 02:37:53 +0000 (0:00:02.098) 0:08:03.203 *****
2026-02-05 02:38:08.328327 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:38:08.328333 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:38:08.328339 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:38:08.328345 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:38:08.328350 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:38:08.328356 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:38:08.328362 | orchestrator |
2026-02-05 02:38:08.328368 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-05 02:38:08.328373 | orchestrator | Thursday 05 February 2026 02:37:54 +0000 (0:00:01.766) 0:08:04.969 *****
2026-02-05 02:38:08.328379 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:38:08.328385 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:38:08.328391 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:38:08.328396 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:38:08.328402 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:38:08.328408 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:38:08.328413 | orchestrator |
2026-02-05 02:38:08.328419 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-05 02:38:08.328425 | orchestrator | Thursday 05 February 2026 02:37:55 +0000 (0:00:00.988) 0:08:05.957 *****
2026-02-05 02:38:08.328432 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:38:08.328439 | orchestrator |
2026-02-05 02:38:08.328487 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-05 02:38:08.328493 | orchestrator | Thursday 05 February 2026 02:37:57 +0000 (0:00:01.349) 0:08:07.307 *****
2026-02-05 02:38:08.328499 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:38:08.328505 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:38:08.328510 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:38:08.328516 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:38:08.328522 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:38:08.328528 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:38:08.328533 | orchestrator |
2026-02-05 02:38:08.328539 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-05 02:38:08.328545 | orchestrator | Thursday 05 February 2026 02:37:58 +0000 (0:00:01.791) 0:08:09.098 *****
2026-02-05 02:38:08.328551 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:38:08.328557 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:38:08.328562 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:38:08.328568 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:38:08.328574 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:38:08.328580 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:38:08.328585 | orchestrator |
2026-02-05 02:38:08.328591 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-05 02:38:08.328597 | orchestrator | Thursday 05 February 2026 02:38:02 +0000 (0:00:03.237) 0:08:12.336 *****
2026-02-05 02:38:08.328607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:38:08.328613 | orchestrator |
2026-02-05 02:38:08.328619 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-05 02:38:08.328625 | orchestrator | Thursday 05 February 2026 02:38:03 +0000 (0:00:01.392) 0:08:13.728 *****
2026-02-05 02:38:08.328635 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:08.328641 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:08.328646 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:08.328652 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:38:08.328658 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:38:08.328664 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:38:08.328669 | orchestrator |
2026-02-05 02:38:08.328675 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-05 02:38:08.328681 | orchestrator | Thursday 05 February 2026 02:38:04 +0000 (0:00:00.883) 0:08:14.611 *****
2026-02-05 02:38:08.328687 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:38:08.328693 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:38:08.328699 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:38:08.328704 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:38:08.328710 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:38:08.328716 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:38:08.328722 | orchestrator |
2026-02-05 02:38:08.328727 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-05 02:38:08.328733 | orchestrator | Thursday 05 February 2026 02:38:07 +0000 (0:00:02.974) 0:08:17.586 *****
2026-02-05 02:38:08.328739 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:08.328747 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:08.328757 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:08.328766 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:38:08.328787 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:38:36.043123 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:38:36.043209 | orchestrator |
2026-02-05 02:38:36.043220 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-05 02:38:36.043228 | orchestrator |
2026-02-05 02:38:36.043236 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 02:38:36.043243 | orchestrator | Thursday 05 February 2026 02:38:08 +0000 (0:00:00.916) 0:08:18.502 *****
2026-02-05 02:38:36.043250 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:38:36.043257 | orchestrator |
2026-02-05 02:38:36.043264 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 02:38:36.043271 | orchestrator | Thursday 05 February 2026 02:38:09 +0000 (0:00:00.790) 0:08:19.292 *****
2026-02-05 02:38:36.043278 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:38:36.043284 | orchestrator |
2026-02-05 02:38:36.043290 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 02:38:36.043297 | orchestrator | Thursday 05 February 2026 02:38:09 +0000 (0:00:00.535) 0:08:19.828 *****
2026-02-05 02:38:36.043303 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043310 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043316 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043322 | orchestrator |
2026-02-05 02:38:36.043329 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 02:38:36.043335 | orchestrator | Thursday 05 February 2026 02:38:10 +0000 (0:00:00.541) 0:08:20.370 *****
2026-02-05 02:38:36.043341 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043348 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043386 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043393 | orchestrator |
2026-02-05 02:38:36.043399 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 02:38:36.043405 | orchestrator | Thursday 05 February 2026 02:38:10 +0000 (0:00:00.738) 0:08:21.109 *****
2026-02-05 02:38:36.043412 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043418 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043424 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043431 | orchestrator |
2026-02-05 02:38:36.043437 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 02:38:36.043461 | orchestrator | Thursday 05 February 2026 02:38:11 +0000 (0:00:00.718) 0:08:21.828 *****
2026-02-05 02:38:36.043468 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043474 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043480 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043486 | orchestrator |
2026-02-05 02:38:36.043493 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 02:38:36.043499 | orchestrator | Thursday 05 February 2026 02:38:12 +0000 (0:00:00.721) 0:08:22.549 *****
2026-02-05 02:38:36.043505 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043511 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043517 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043524 | orchestrator |
2026-02-05 02:38:36.043530 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 02:38:36.043536 | orchestrator | Thursday 05 February 2026 02:38:12 +0000 (0:00:00.570) 0:08:23.120 *****
2026-02-05 02:38:36.043542 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043549 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043555 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043561 | orchestrator |
2026-02-05 02:38:36.043567 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 02:38:36.043574 | orchestrator | Thursday 05 February 2026 02:38:13 +0000 (0:00:00.361) 0:08:23.481 *****
2026-02-05 02:38:36.043580 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043586 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043592 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043598 | orchestrator |
2026-02-05 02:38:36.043604 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 02:38:36.043611 | orchestrator | Thursday 05 February 2026 02:38:13 +0000 (0:00:00.349) 0:08:23.830 *****
2026-02-05 02:38:36.043617 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043623 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043629 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043636 | orchestrator |
2026-02-05 02:38:36.043642 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 02:38:36.043660 | orchestrator | Thursday 05 February 2026 02:38:14 +0000 (0:00:00.734) 0:08:24.565 *****
2026-02-05 02:38:36.043666 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043672 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043678 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043685 | orchestrator |
2026-02-05 02:38:36.043691 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 02:38:36.043698 | orchestrator | Thursday 05 February 2026 02:38:15 +0000 (0:00:01.098) 0:08:25.664 *****
2026-02-05 02:38:36.043706 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043713 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043721 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043728 | orchestrator |
2026-02-05 02:38:36.043736 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 02:38:36.043743 | orchestrator | Thursday 05 February 2026 02:38:15 +0000 (0:00:00.304) 0:08:25.969 *****
2026-02-05 02:38:36.043750 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043757 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043764 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043772 | orchestrator |
2026-02-05 02:38:36.043779 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 02:38:36.043786 | orchestrator | Thursday 05 February 2026 02:38:16 +0000 (0:00:00.320) 0:08:26.289 *****
2026-02-05 02:38:36.043793 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043800 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043808 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043815 | orchestrator |
2026-02-05 02:38:36.043823 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 02:38:36.043829 | orchestrator | Thursday 05 February 2026 02:38:16 +0000 (0:00:00.374) 0:08:26.664 *****
2026-02-05 02:38:36.043853 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043860 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043866 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043872 | orchestrator |
2026-02-05 02:38:36.043879 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 02:38:36.043885 | orchestrator | Thursday 05 February 2026 02:38:17 +0000 (0:00:00.577) 0:08:27.241 *****
2026-02-05 02:38:36.043892 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.043898 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.043904 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.043910 | orchestrator |
2026-02-05 02:38:36.043917 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 02:38:36.043923 | orchestrator | Thursday 05 February 2026 02:38:17 +0000 (0:00:00.366) 0:08:27.608 *****
2026-02-05 02:38:36.043930 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.043940 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.043949 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.043959 | orchestrator |
2026-02-05 02:38:36.043970 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 02:38:36.043981 | orchestrator | Thursday 05 February 2026 02:38:17 +0000 (0:00:00.326) 0:08:27.934 *****
2026-02-05 02:38:36.043990 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.044000 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.044010 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.044019 | orchestrator |
2026-02-05 02:38:36.044028 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 02:38:36.044038 | orchestrator | Thursday 05 February 2026 02:38:18 +0000 (0:00:00.307) 0:08:28.241 *****
2026-02-05 02:38:36.044047 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.044057 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.044067 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.044076 | orchestrator |
2026-02-05 02:38:36.044086 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 02:38:36.044097 | orchestrator | Thursday 05 February 2026 02:38:18 +0000 (0:00:00.579) 0:08:28.821 *****
2026-02-05 02:38:36.044107 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.044118 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.044129 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.044139 | orchestrator |
2026-02-05 02:38:36.044148 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 02:38:36.044158 | orchestrator | Thursday 05 February 2026 02:38:18 +0000 (0:00:00.347) 0:08:29.168 *****
2026-02-05 02:38:36.044168 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:38:36.044177 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:38:36.044187 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:38:36.044197 | orchestrator |
2026-02-05 02:38:36.044207 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-05 02:38:36.044217 | orchestrator | Thursday 05 February 2026 02:38:19 +0000 (0:00:00.582) 0:08:29.751 *****
2026-02-05 02:38:36.044226 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:38:36.044236 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:38:36.044246 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-05 02:38:36.044256 | orchestrator |
2026-02-05 02:38:36.044266 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-05 02:38:36.044276 | orchestrator | Thursday 05 February 2026 02:38:20 +0000 (0:00:00.670) 0:08:30.421 *****
2026-02-05 02:38:36.044286 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:38:36.044295 | orchestrator |
2026-02-05 02:38:36.044305 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-05 02:38:36.044314 | orchestrator | Thursday 05 February 2026 02:38:22 +0000 (0:00:02.171) 0:08:32.592 *****
2026-02-05 02:38:36.044326 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-05 02:38:36.044345 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:38:36.044373 | orchestrator |
2026-02-05 02:38:36.044383 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-05 02:38:36.044393 | orchestrator | Thursday 05 February 2026 02:38:22 +0000 (0:00:00.226) 0:08:32.819 *****
2026-02-05 02:38:36.044412 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-05 02:38:36.044430 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-05 02:38:36.044439 | orchestrator |
2026-02-05 02:38:36.044449 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-05 02:38:36.044459 | orchestrator | Thursday 05 February 2026 02:38:30 +0000 (0:00:07.841) 0:08:40.661 *****
2026-02-05 02:38:36.044468 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 02:38:36.044478 | orchestrator |
2026-02-05 02:38:36.044487 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-05 02:38:36.044497 | orchestrator | Thursday 05 February 2026 02:38:34 +0000 (0:00:03.695) 0:08:44.356 *****
2026-02-05 02:38:36.044507 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:38:36.044517 | orchestrator |
2026-02-05 02:38:36.044555 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-05 02:38:36.044565 | orchestrator | Thursday 05 February 2026 02:38:34 +0000 (0:00:00.818) 0:08:45.175 *****
2026-02-05 02:38:36.044584 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-05 02:39:00.744954 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-05 02:39:00.745091 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-05 02:39:00.745119 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-05 02:39:00.745141 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-05 02:39:00.745153 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-05 02:39:00.745168 | orchestrator |
2026-02-05 02:39:00.745190 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-05 02:39:00.745209 | orchestrator | Thursday 05 February 2026 02:38:36 +0000 (0:00:01.041) 0:08:46.216 *****
2026-02-05 02:39:00.745228 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 02:39:00.745242 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-05 02:39:00.745257 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-05 02:39:00.745300 | orchestrator |
2026-02-05 02:39:00.745320 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-05 02:39:00.745339 | orchestrator | Thursday 05 February 2026 02:38:38 +0000 (0:00:02.211) 0:08:48.428 *****
2026-02-05 02:39:00.745359 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-05 02:39:00.745378 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-02-05 02:39:00.745390 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.745402 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 02:39:00.745416 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 02:39:00.745437 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.745458 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 02:39:00.745473 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 02:39:00.745517 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.745540 | orchestrator | 2026-02-05 02:39:00.745558 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-05 02:39:00.745570 | orchestrator | Thursday 05 February 2026 02:38:39 +0000 (0:00:01.221) 0:08:49.650 ***** 2026-02-05 02:39:00.745581 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.745595 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.745614 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.745634 | orchestrator | 2026-02-05 02:39:00.745652 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-05 02:39:00.745672 | orchestrator | Thursday 05 February 2026 02:38:42 +0000 (0:00:02.979) 0:08:52.629 ***** 2026-02-05 02:39:00.745690 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:00.745709 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:00.745727 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:00.745746 | orchestrator | 2026-02-05 02:39:00.745766 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-05 02:39:00.745784 | orchestrator | Thursday 05 February 2026 02:38:42 +0000 (0:00:00.335) 0:08:52.964 ***** 2026-02-05 02:39:00.745795 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-05 02:39:00.745807 | orchestrator | 2026-02-05 02:39:00.745818 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-05 02:39:00.745829 | orchestrator | Thursday 05 February 2026 02:38:43 +0000 (0:00:00.608) 0:08:53.572 ***** 2026-02-05 02:39:00.745840 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:39:00.745852 | orchestrator | 2026-02-05 02:39:00.745863 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-05 02:39:00.745873 | orchestrator | Thursday 05 February 2026 02:38:44 +0000 (0:00:00.863) 0:08:54.435 ***** 2026-02-05 02:39:00.745884 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.745895 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.745906 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.745917 | orchestrator | 2026-02-05 02:39:00.745928 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-05 02:39:00.745939 | orchestrator | Thursday 05 February 2026 02:38:45 +0000 (0:00:01.274) 0:08:55.710 ***** 2026-02-05 02:39:00.745965 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.745977 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.745988 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.745999 | orchestrator | 2026-02-05 02:39:00.746087 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-05 02:39:00.746107 | orchestrator | Thursday 05 February 2026 02:38:46 +0000 (0:00:01.208) 0:08:56.918 ***** 2026-02-05 02:39:00.746127 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.746155 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.746174 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.746192 | orchestrator | 2026-02-05 
02:39:00.746209 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-05 02:39:00.746226 | orchestrator | Thursday 05 February 2026 02:38:48 +0000 (0:00:01.970) 0:08:58.888 ***** 2026-02-05 02:39:00.746244 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.746263 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.746326 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.746347 | orchestrator | 2026-02-05 02:39:00.746367 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-05 02:39:00.746387 | orchestrator | Thursday 05 February 2026 02:38:50 +0000 (0:00:01.873) 0:09:00.762 ***** 2026-02-05 02:39:00.746403 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:00.746415 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:00.746426 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:00.746437 | orchestrator | 2026-02-05 02:39:00.746448 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 02:39:00.746474 | orchestrator | Thursday 05 February 2026 02:38:52 +0000 (0:00:01.468) 0:09:02.231 ***** 2026-02-05 02:39:00.746485 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.746496 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.746529 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.746541 | orchestrator | 2026-02-05 02:39:00.746552 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-05 02:39:00.746563 | orchestrator | Thursday 05 February 2026 02:38:52 +0000 (0:00:00.709) 0:09:02.940 ***** 2026-02-05 02:39:00.746574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:39:00.746585 | orchestrator | 2026-02-05 02:39:00.746596 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-05 02:39:00.746607 | orchestrator | Thursday 05 February 2026 02:38:53 +0000 (0:00:00.520) 0:09:03.461 ***** 2026-02-05 02:39:00.746618 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:00.746629 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:00.746640 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:00.746651 | orchestrator | 2026-02-05 02:39:00.746662 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-05 02:39:00.746672 | orchestrator | Thursday 05 February 2026 02:38:53 +0000 (0:00:00.567) 0:09:04.028 ***** 2026-02-05 02:39:00.746683 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:00.746694 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:00.746705 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:00.746716 | orchestrator | 2026-02-05 02:39:00.746727 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-05 02:39:00.746738 | orchestrator | Thursday 05 February 2026 02:38:54 +0000 (0:00:01.134) 0:09:05.162 ***** 2026-02-05 02:39:00.746749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:39:00.746761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:39:00.746772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:39:00.746783 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:00.746794 | orchestrator | 2026-02-05 02:39:00.746805 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-05 02:39:00.746816 | orchestrator | Thursday 05 February 2026 02:38:55 +0000 (0:00:00.572) 0:09:05.735 ***** 2026-02-05 02:39:00.746827 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:00.746838 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:00.746849 | orchestrator | ok: [testbed-node-5] 2026-02-05 
02:39:00.746860 | orchestrator | 2026-02-05 02:39:00.746871 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-05 02:39:00.746882 | orchestrator | 2026-02-05 02:39:00.746893 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 02:39:00.746907 | orchestrator | Thursday 05 February 2026 02:38:56 +0000 (0:00:00.677) 0:09:06.412 ***** 2026-02-05 02:39:00.746925 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:39:00.746957 | orchestrator | 2026-02-05 02:39:00.746976 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 02:39:00.746993 | orchestrator | Thursday 05 February 2026 02:38:56 +0000 (0:00:00.473) 0:09:06.886 ***** 2026-02-05 02:39:00.747011 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:39:00.747028 | orchestrator | 2026-02-05 02:39:00.747046 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 02:39:00.747065 | orchestrator | Thursday 05 February 2026 02:38:57 +0000 (0:00:00.607) 0:09:07.494 ***** 2026-02-05 02:39:00.747082 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:00.747102 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:00.747122 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:00.747152 | orchestrator | 2026-02-05 02:39:00.747164 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 02:39:00.747175 | orchestrator | Thursday 05 February 2026 02:38:57 +0000 (0:00:00.306) 0:09:07.800 ***** 2026-02-05 02:39:00.747186 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:00.747197 | orchestrator | ok: [testbed-node-4] 2026-02-05 
02:39:00.747208 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:00.747219 | orchestrator | 2026-02-05 02:39:00.747230 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 02:39:00.747241 | orchestrator | Thursday 05 February 2026 02:38:58 +0000 (0:00:00.675) 0:09:08.476 ***** 2026-02-05 02:39:00.747252 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:00.747263 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:00.747369 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:00.747385 | orchestrator | 2026-02-05 02:39:00.747396 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 02:39:00.747407 | orchestrator | Thursday 05 February 2026 02:38:58 +0000 (0:00:00.658) 0:09:09.134 ***** 2026-02-05 02:39:00.747419 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:00.747429 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:00.747441 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:00.747451 | orchestrator | 2026-02-05 02:39:00.747462 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 02:39:00.747473 | orchestrator | Thursday 05 February 2026 02:38:59 +0000 (0:00:00.911) 0:09:10.046 ***** 2026-02-05 02:39:00.747484 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:00.747495 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:00.747506 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:00.747517 | orchestrator | 2026-02-05 02:39:00.747528 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 02:39:00.747539 | orchestrator | Thursday 05 February 2026 02:39:00 +0000 (0:00:00.265) 0:09:10.311 ***** 2026-02-05 02:39:00.747550 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:00.747561 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:00.747572 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 02:39:00.747583 | orchestrator | 2026-02-05 02:39:00.747594 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 02:39:00.747605 | orchestrator | Thursday 05 February 2026 02:39:00 +0000 (0:00:00.286) 0:09:10.598 ***** 2026-02-05 02:39:00.747616 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:00.747627 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:00.747638 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:00.747649 | orchestrator | 2026-02-05 02:39:00.747671 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 02:39:23.026549 | orchestrator | Thursday 05 February 2026 02:39:00 +0000 (0:00:00.318) 0:09:10.917 ***** 2026-02-05 02:39:23.026696 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.026724 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.026738 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.026749 | orchestrator | 2026-02-05 02:39:23.026761 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 02:39:23.026773 | orchestrator | Thursday 05 February 2026 02:39:01 +0000 (0:00:00.901) 0:09:11.818 ***** 2026-02-05 02:39:23.026784 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.026796 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.026806 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.026817 | orchestrator | 2026-02-05 02:39:23.026829 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 02:39:23.026840 | orchestrator | Thursday 05 February 2026 02:39:02 +0000 (0:00:00.770) 0:09:12.589 ***** 2026-02-05 02:39:23.026851 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:23.026864 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:23.026875 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
02:39:23.026886 | orchestrator | 2026-02-05 02:39:23.026897 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 02:39:23.026938 | orchestrator | Thursday 05 February 2026 02:39:02 +0000 (0:00:00.400) 0:09:12.989 ***** 2026-02-05 02:39:23.026958 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:23.026976 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:23.026996 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:23.027015 | orchestrator | 2026-02-05 02:39:23.027035 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 02:39:23.027055 | orchestrator | Thursday 05 February 2026 02:39:03 +0000 (0:00:00.398) 0:09:13.388 ***** 2026-02-05 02:39:23.027074 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.027091 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.027105 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.027118 | orchestrator | 2026-02-05 02:39:23.027132 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 02:39:23.027145 | orchestrator | Thursday 05 February 2026 02:39:03 +0000 (0:00:00.659) 0:09:14.047 ***** 2026-02-05 02:39:23.027158 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.027170 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.027183 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.027196 | orchestrator | 2026-02-05 02:39:23.027252 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 02:39:23.027268 | orchestrator | Thursday 05 February 2026 02:39:04 +0000 (0:00:00.348) 0:09:14.395 ***** 2026-02-05 02:39:23.027281 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.027294 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.027312 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.027331 | orchestrator | 2026-02-05 
02:39:23.027350 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 02:39:23.027370 | orchestrator | Thursday 05 February 2026 02:39:04 +0000 (0:00:00.329) 0:09:14.725 ***** 2026-02-05 02:39:23.027389 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:23.027409 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:23.027429 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:23.027448 | orchestrator | 2026-02-05 02:39:23.027462 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 02:39:23.027474 | orchestrator | Thursday 05 February 2026 02:39:04 +0000 (0:00:00.305) 0:09:15.030 ***** 2026-02-05 02:39:23.027485 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:23.027496 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:23.027507 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:23.027517 | orchestrator | 2026-02-05 02:39:23.027529 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 02:39:23.027539 | orchestrator | Thursday 05 February 2026 02:39:05 +0000 (0:00:00.586) 0:09:15.617 ***** 2026-02-05 02:39:23.027550 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:23.027561 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:23.027572 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:23.027586 | orchestrator | 2026-02-05 02:39:23.027604 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 02:39:23.027622 | orchestrator | Thursday 05 February 2026 02:39:05 +0000 (0:00:00.344) 0:09:15.961 ***** 2026-02-05 02:39:23.027640 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.027659 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.027670 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.027687 | orchestrator | 2026-02-05 02:39:23.027725 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 02:39:23.027746 | orchestrator | Thursday 05 February 2026 02:39:06 +0000 (0:00:00.361) 0:09:16.322 ***** 2026-02-05 02:39:23.027765 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:39:23.027785 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:39:23.027803 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:39:23.027821 | orchestrator | 2026-02-05 02:39:23.027832 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-05 02:39:23.027843 | orchestrator | Thursday 05 February 2026 02:39:06 +0000 (0:00:00.831) 0:09:17.154 ***** 2026-02-05 02:39:23.027866 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:39:23.027878 | orchestrator | 2026-02-05 02:39:23.027889 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 02:39:23.027900 | orchestrator | Thursday 05 February 2026 02:39:07 +0000 (0:00:00.559) 0:09:17.714 ***** 2026-02-05 02:39:23.027910 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.027928 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 02:39:23.027947 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 02:39:23.027964 | orchestrator | 2026-02-05 02:39:23.027982 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 02:39:23.028000 | orchestrator | Thursday 05 February 2026 02:39:09 +0000 (0:00:02.245) 0:09:19.959 ***** 2026-02-05 02:39:23.028018 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 02:39:23.028037 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 02:39:23.028055 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:23.028098 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-05 02:39:23.028118 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 02:39:23.028136 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:23.028155 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 02:39:23.028173 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 02:39:23.028192 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:23.028236 | orchestrator | 2026-02-05 02:39:23.028256 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-05 02:39:23.028272 | orchestrator | Thursday 05 February 2026 02:39:11 +0000 (0:00:01.433) 0:09:21.392 ***** 2026-02-05 02:39:23.028284 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:39:23.028294 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:39:23.028305 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:39:23.028316 | orchestrator | 2026-02-05 02:39:23.028327 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-05 02:39:23.028338 | orchestrator | Thursday 05 February 2026 02:39:11 +0000 (0:00:00.341) 0:09:21.734 ***** 2026-02-05 02:39:23.028349 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:39:23.028360 | orchestrator | 2026-02-05 02:39:23.028371 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-05 02:39:23.028381 | orchestrator | Thursday 05 February 2026 02:39:12 +0000 (0:00:00.563) 0:09:22.297 ***** 2026-02-05 02:39:23.028394 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 02:39:23.028407 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 02:39:23.028418 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 02:39:23.028429 | orchestrator | 2026-02-05 02:39:23.028440 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-05 02:39:23.028450 | orchestrator | Thursday 05 February 2026 02:39:13 +0000 (0:00:01.366) 0:09:23.663 ***** 2026-02-05 02:39:23.028467 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.028486 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 02:39:23.028504 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.028523 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 02:39:23.028558 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.028578 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 02:39:23.028596 | orchestrator | 2026-02-05 02:39:23.028615 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 02:39:23.028635 | orchestrator | Thursday 05 February 2026 02:39:18 +0000 (0:00:04.800) 0:09:28.463 ***** 2026-02-05 02:39:23.028653 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.028666 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.028677 | 
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 02:39:23.028688 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 02:39:23.028699 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:39:23.028717 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 02:39:23.028728 | orchestrator | 2026-02-05 02:39:23.028739 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 02:39:23.028750 | orchestrator | Thursday 05 February 2026 02:39:20 +0000 (0:00:02.409) 0:09:30.873 ***** 2026-02-05 02:39:23.028761 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 02:39:23.028772 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:39:23.028783 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 02:39:23.028794 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:39:23.028805 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 02:39:23.028815 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:39:23.028828 | orchestrator | 2026-02-05 02:39:23.028847 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-05 02:39:23.028865 | orchestrator | Thursday 05 February 2026 02:39:21 +0000 (0:00:01.170) 0:09:32.043 ***** 2026-02-05 02:39:23.028883 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-05 02:39:23.028902 | orchestrator | 2026-02-05 02:39:23.028921 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-05 02:39:23.028941 | orchestrator | Thursday 05 February 2026 02:39:22 +0000 (0:00:00.523) 0:09:32.567 ***** 2026-02-05 02:39:23.028960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-05 02:39:23.028974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:39:23.028994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923195 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:06.923209 | orchestrator | 2026-02-05 02:40:06.923222 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-05 02:40:06.923234 | orchestrator | Thursday 05 February 2026 02:39:23 +0000 (0:00:00.623) 0:09:33.191 ***** 2026-02-05 02:40:06.923246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 02:40:06.923355 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
02:40:06.923374 | orchestrator |
2026-02-05 02:40:06.923392 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-05 02:40:06.923408 | orchestrator | Thursday 05 February 2026 02:39:23 +0000 (0:00:00.650) 0:09:33.841 *****
2026-02-05 02:40:06.923419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 02:40:06.923432 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 02:40:06.923443 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 02:40:06.923454 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 02:40:06.923465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 02:40:06.923477 | orchestrator |
2026-02-05 02:40:06.923488 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-05 02:40:06.923499 | orchestrator | Thursday 05 February 2026 02:39:54 +0000 (0:00:30.518) 0:10:04.360 *****
2026-02-05 02:40:06.923524 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:06.923535 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:06.923546 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:06.923557 | orchestrator |
2026-02-05 02:40:06.923570 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-05 02:40:06.923584 | orchestrator | Thursday 05 February 2026 02:39:54 +0000 (0:00:00.307) 0:10:04.667 *****
2026-02-05 02:40:06.923597 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:06.923610 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:06.923624 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:06.923637 | orchestrator |
2026-02-05 02:40:06.923669 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-05 02:40:06.923689 | orchestrator | Thursday 05 February 2026 02:39:54 +0000 (0:00:00.313) 0:10:04.981 *****
2026-02-05 02:40:06.923709 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:40:06.923728 | orchestrator |
2026-02-05 02:40:06.923749 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-05 02:40:06.923769 | orchestrator | Thursday 05 February 2026 02:39:55 +0000 (0:00:00.867) 0:10:05.848 *****
2026-02-05 02:40:06.923789 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:40:06.923809 | orchestrator |
2026-02-05 02:40:06.923828 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-05 02:40:06.923849 | orchestrator | Thursday 05 February 2026 02:39:56 +0000 (0:00:00.569) 0:10:06.418 *****
2026-02-05 02:40:06.923868 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:40:06.923888 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:40:06.923900 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:40:06.923911 | orchestrator |
2026-02-05 02:40:06.923922 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-05 02:40:06.923932 | orchestrator | Thursday 05 February 2026 02:39:58 +0000 (0:00:01.901) 0:10:08.319 *****
2026-02-05 02:40:06.923955 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:40:06.923966 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:40:06.923983 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:40:06.923999 | orchestrator |
2026-02-05 02:40:06.924010 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-05 02:40:06.924021 | orchestrator | Thursday 05 February 2026 02:39:59 +0000 (0:00:01.218) 0:10:09.537 *****
2026-02-05 02:40:06.924032 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:40:06.924064 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:40:06.924076 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:40:06.924155 | orchestrator |
2026-02-05 02:40:06.924188 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-05 02:40:06.924205 | orchestrator | Thursday 05 February 2026 02:40:01 +0000 (0:00:01.788) 0:10:11.325 *****
2026-02-05 02:40:06.924222 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-05 02:40:06.924240 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 02:40:06.924267 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 02:40:06.924288 | orchestrator |
2026-02-05 02:40:06.924307 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-05 02:40:06.924326 | orchestrator | Thursday 05 February 2026 02:40:03 +0000 (0:00:02.691) 0:10:14.017 *****
2026-02-05 02:40:06.924345 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:06.924364 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:06.924376 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:06.924387 | orchestrator |
2026-02-05 02:40:06.924398 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-05 02:40:06.924409 | orchestrator | Thursday 05 February 2026 02:40:04 +0000 (0:00:00.362) 0:10:14.379 *****
2026-02-05 02:40:06.924420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:40:06.924431 | orchestrator |
2026-02-05 02:40:06.924442 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-05 02:40:06.924453 | orchestrator | Thursday 05 February 2026 02:40:04 +0000 (0:00:00.793) 0:10:15.173 *****
2026-02-05 02:40:06.924464 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:06.924476 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:06.924486 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:06.924497 | orchestrator |
2026-02-05 02:40:06.924508 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-05 02:40:06.924526 | orchestrator | Thursday 05 February 2026 02:40:05 +0000 (0:00:00.344) 0:10:15.518 *****
2026-02-05 02:40:06.924545 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:06.924577 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:06.924604 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:06.924625 | orchestrator |
2026-02-05 02:40:06.924643 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-05 02:40:06.924663 | orchestrator | Thursday 05 February 2026 02:40:05 +0000 (0:00:00.396) 0:10:15.915 *****
2026-02-05 02:40:06.924682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 02:40:06.924701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 02:40:06.924720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 02:40:06.924733 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:06.924752 | orchestrator |
2026-02-05 02:40:06.924769 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-05 02:40:06.924788 | orchestrator | Thursday 05 February 2026 02:40:06 +0000 (0:00:00.912) 0:10:16.827 *****
2026-02-05 02:40:06.924806 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:06.924826 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:06.924858 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:06.924878 | orchestrator |
2026-02-05 02:40:06.924891 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:40:06.924902 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-05 02:40:06.924924 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-05 02:40:06.924935 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-05 02:40:06.924946 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-05 02:40:06.924957 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-05 02:40:06.924968 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-05 02:40:06.924979 | orchestrator |
2026-02-05 02:40:06.924990 | orchestrator |
2026-02-05 02:40:06.925001 | orchestrator |
2026-02-05 02:40:06.925012 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:40:06.925023 | orchestrator | Thursday 05 February 2026 02:40:06 +0000 (0:00:00.249) 0:10:17.076 *****
2026-02-05 02:40:06.925034 | orchestrator | ===============================================================================
2026-02-05 02:40:06.925045 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.67s
2026-02-05 02:40:06.925056 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.40s
2026-02-05 02:40:06.925067 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.52s
2026-02-05 02:40:06.925078 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.31s
2026-02-05 02:40:06.925116 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.98s
2026-02-05 02:40:06.925140 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s
2026-02-05 02:40:07.489544 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.09s
2026-02-05 02:40:07.489642 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.51s
2026-02-05 02:40:07.489656 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.84s
2026-02-05 02:40:07.489667 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s
2026-02-05 02:40:07.489678 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.14s
2026-02-05 02:40:07.489689 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.17s
2026-02-05 02:40:07.489700 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.80s
2026-02-05 02:40:07.489711 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.25s
2026-02-05 02:40:07.489722 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.77s
2026-02-05 02:40:07.489734 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.70s
2026-02-05 02:40:07.489745 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.33s
2026-02-05 02:40:07.489755 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.24s
2026-02-05 02:40:07.489765 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.20s
2026-02-05 02:40:07.489776 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.09s
2026-02-05 02:40:09.905035 | orchestrator | 2026-02-05 02:40:09 | INFO  | Task 4238c07e-64d7-49a7-8032-506a31b629b1 (ceph-pools) was prepared for execution.
2026-02-05 02:40:09.905204 | orchestrator | 2026-02-05 02:40:09 | INFO  | It takes a moment until task 4238c07e-64d7-49a7-8032-506a31b629b1 (ceph-pools) has been started and output is visible here.
2026-02-05 02:40:23.139970 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 02:40:23.140148 | orchestrator | 2.16.14
2026-02-05 02:40:23.140164 | orchestrator |
2026-02-05 02:40:23.140174 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-05 02:40:23.140181 | orchestrator |
2026-02-05 02:40:23.140188 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 02:40:23.140194 | orchestrator | Thursday 05 February 2026 02:40:14 +0000 (0:00:00.604) 0:00:00.604 *****
2026-02-05 02:40:23.140200 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 02:40:23.140208 | orchestrator |
2026-02-05 02:40:23.140223 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 02:40:23.140270 | orchestrator | Thursday 05 February 2026 02:40:15 +0000 (0:00:00.659) 0:00:01.263 *****
2026-02-05 02:40:23.140277 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140284 |
orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140299 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140306 | orchestrator |
2026-02-05 02:40:23.140313 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 02:40:23.140320 | orchestrator | Thursday 05 February 2026 02:40:15 +0000 (0:00:00.633) 0:00:01.897 *****
2026-02-05 02:40:23.140326 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140333 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140340 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140346 | orchestrator |
2026-02-05 02:40:23.140353 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 02:40:23.140360 | orchestrator | Thursday 05 February 2026 02:40:15 +0000 (0:00:00.269) 0:00:02.167 *****
2026-02-05 02:40:23.140367 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140373 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140380 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140387 | orchestrator |
2026-02-05 02:40:23.140408 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 02:40:23.140416 | orchestrator | Thursday 05 February 2026 02:40:16 +0000 (0:00:00.726) 0:00:02.894 *****
2026-02-05 02:40:23.140422 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140429 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140436 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140443 | orchestrator |
2026-02-05 02:40:23.140450 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 02:40:23.140457 | orchestrator | Thursday 05 February 2026 02:40:16 +0000 (0:00:00.277) 0:00:03.171 *****
2026-02-05 02:40:23.140464 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140471 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140477 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140484 | orchestrator |
2026-02-05 02:40:23.140492 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 02:40:23.140498 | orchestrator | Thursday 05 February 2026 02:40:17 +0000 (0:00:00.307) 0:00:03.478 *****
2026-02-05 02:40:23.140504 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140510 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140517 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140523 | orchestrator |
2026-02-05 02:40:23.140530 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 02:40:23.140536 | orchestrator | Thursday 05 February 2026 02:40:17 +0000 (0:00:00.284) 0:00:03.763 *****
2026-02-05 02:40:23.140542 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:23.140550 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:23.140556 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:23.140562 | orchestrator |
2026-02-05 02:40:23.140569 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 02:40:23.140597 | orchestrator | Thursday 05 February 2026 02:40:17 +0000 (0:00:00.464) 0:00:04.227 *****
2026-02-05 02:40:23.140604 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140610 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140616 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140622 | orchestrator |
2026-02-05 02:40:23.140628 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 02:40:23.140633 | orchestrator | Thursday 05 February 2026 02:40:18 +0000 (0:00:00.295) 0:00:04.523 *****
2026-02-05 02:40:23.140639 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 02:40:23.140646 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 02:40:23.140652 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 02:40:23.140658 | orchestrator |
2026-02-05 02:40:23.140665 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 02:40:23.140671 | orchestrator | Thursday 05 February 2026 02:40:18 +0000 (0:00:00.648) 0:00:05.171 *****
2026-02-05 02:40:23.140678 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:23.140684 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:23.140691 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:23.140698 | orchestrator |
2026-02-05 02:40:23.140704 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 02:40:23.140709 | orchestrator | Thursday 05 February 2026 02:40:19 +0000 (0:00:00.381) 0:00:05.553 *****
2026-02-05 02:40:23.140715 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 02:40:23.140720 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 02:40:23.140726 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 02:40:23.140731 | orchestrator |
2026-02-05 02:40:23.140737 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 02:40:23.140743 | orchestrator | Thursday 05 February 2026 02:40:21 +0000 (0:00:02.028) 0:00:07.581 *****
2026-02-05 02:40:23.140751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 02:40:23.140759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 02:40:23.140765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 02:40:23.140771 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:23.140777 |
orchestrator |
2026-02-05 02:40:23.140804 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 02:40:23.140813 | orchestrator | Thursday 05 February 2026 02:40:21 +0000 (0:00:00.555) 0:00:08.137 *****
2026-02-05 02:40:23.140823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140833 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140846 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:23.140850 | orchestrator |
2026-02-05 02:40:23.140855 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 02:40:23.140860 | orchestrator | Thursday 05 February 2026 02:40:22 +0000 (0:00:00.879) 0:00:09.017 *****
2026-02-05 02:40:23.140872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140898 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:23.140902 | orchestrator |
2026-02-05 02:40:23.140907 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 02:40:23.140911 | orchestrator | Thursday 05 February 2026 02:40:22 +0000 (0:00:00.171) 0:00:09.188 *****
2026-02-05 02:40:23.140918 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'de37024be869', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 02:40:20.091857', 'end': '2026-02-05 02:40:20.143773', 'delta': '0:00:00.051916', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de37024be869'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140927 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'df4012ab4a61', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 02:40:20.625771', 'end': '2026-02-05 02:40:20.662985', 'delta': '0:00:00.037214', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df4012ab4a61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 02:40:23.140936 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '458f6feaf079', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 02:40:21.164044', 'end': '2026-02-05 02:40:21.198154', 'delta': '0:00:00.034110', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['458f6feaf079'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 02:40:29.147770 | orchestrator |
2026-02-05 02:40:29.147894 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 02:40:29.147915 | orchestrator | Thursday 05 February 2026 02:40:23 +0000 (0:00:00.188) 0:00:09.377 *****
2026-02-05 02:40:29.147958 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:29.147974 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:29.147989 |
orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:29.148004 | orchestrator |
2026-02-05 02:40:29.148019 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 02:40:29.148057 | orchestrator | Thursday 05 February 2026 02:40:23 +0000 (0:00:00.408) 0:00:09.786 *****
2026-02-05 02:40:29.148072 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-05 02:40:29.148086 | orchestrator |
2026-02-05 02:40:29.148118 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 02:40:29.148133 | orchestrator | Thursday 05 February 2026 02:40:25 +0000 (0:00:01.639) 0:00:11.425 *****
2026-02-05 02:40:29.148148 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148162 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148176 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148190 | orchestrator |
2026-02-05 02:40:29.148204 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 02:40:29.148216 | orchestrator | Thursday 05 February 2026 02:40:25 +0000 (0:00:00.562) 0:00:11.693 *****
2026-02-05 02:40:29.148230 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148243 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148257 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148270 | orchestrator |
2026-02-05 02:40:29.148283 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 02:40:29.148297 | orchestrator | Thursday 05 February 2026 02:40:26 +0000 (0:00:00.562) 0:00:12.256 *****
2026-02-05 02:40:29.148310 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148323 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148336 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148350 | orchestrator |
2026-02-05 02:40:29.148364 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 02:40:29.148378 | orchestrator | Thursday 05 February 2026 02:40:26 +0000 (0:00:00.131) 0:00:12.515 *****
2026-02-05 02:40:29.148392 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:29.148405 | orchestrator |
2026-02-05 02:40:29.148417 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 02:40:29.148431 | orchestrator | Thursday 05 February 2026 02:40:26 +0000 (0:00:00.232) 0:00:12.646 *****
2026-02-05 02:40:29.148444 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148456 | orchestrator |
2026-02-05 02:40:29.148469 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 02:40:29.148482 | orchestrator | Thursday 05 February 2026 02:40:26 +0000 (0:00:00.291) 0:00:12.879 *****
2026-02-05 02:40:29.148495 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148509 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148522 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148534 | orchestrator |
2026-02-05 02:40:29.148547 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 02:40:29.148560 | orchestrator | Thursday 05 February 2026 02:40:26 +0000 (0:00:00.408) 0:00:13.170 *****
2026-02-05 02:40:29.148571 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148583 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148595 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148607 | orchestrator |
2026-02-05 02:40:29.148619 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 02:40:29.148632 | orchestrator | Thursday 05 February 2026 02:40:27 +0000 (0:00:00.408) 0:00:13.579 *****
2026-02-05 02:40:29.148645 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148659 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148672 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148685 | orchestrator |
2026-02-05 02:40:29.148698 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 02:40:29.148712 | orchestrator | Thursday 05 February 2026 02:40:27 +0000 (0:00:00.297) 0:00:13.877 *****
2026-02-05 02:40:29.148737 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148751 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148763 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148778 | orchestrator |
2026-02-05 02:40:29.148791 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 02:40:29.148805 | orchestrator | Thursday 05 February 2026 02:40:27 +0000 (0:00:00.287) 0:00:14.164 *****
2026-02-05 02:40:29.148818 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148831 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148845 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148858 | orchestrator |
2026-02-05 02:40:29.148872 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 02:40:29.148886 | orchestrator | Thursday 05 February 2026 02:40:28 +0000 (0:00:00.415) 0:00:14.473 *****
2026-02-05 02:40:29.148899 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148912 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:29.148926 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.148940 | orchestrator |
2026-02-05 02:40:29.148953 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 02:40:29.148967 | orchestrator | Thursday 05 February 2026 02:40:28 +0000 (0:00:00.304) 0:00:14.888 *****
2026-02-05 02:40:29.148981 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:29.148995 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:40:29.149008 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:40:29.149022 | orchestrator | 2026-02-05 02:40:29.149058 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 02:40:29.149072 | orchestrator | Thursday 05 February 2026 02:40:28 +0000 (0:00:00.304) 0:00:15.193 ***** 2026-02-05 02:40:29.149115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-05 02:40:29.149241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.149305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.276450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.276572 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.276607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.276656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.276686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.276708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.276732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.276768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.276790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.276811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-05 02:40:29.276823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.276843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470441 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:29.470590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 
'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.470739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.470767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.470799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.470820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.470843 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:40:29.470864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.470935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.662976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.663087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.663117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.663124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.663130 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.663136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 02:40:29.663162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.663178 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.663186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 02:40:29.663193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:40:29.663200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-05 02:40:29.663207 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:29.663214 | orchestrator |
2026-02-05 02:40:29.663221 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-05 02:40:29.663228 | orchestrator | Thursday 05 February 2026 02:40:29 +0000 (0:00:00.521) 0:00:15.714 *****
2026-02-05 02:40:29.663240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'labels': [], 'masters': [], 'uuids': []},
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698734 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698753 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.698778 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 02:40:29.726281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726378 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.726448 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825501 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825624 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825643 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:29.825688 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825778 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825830 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:40:29.825842 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.825867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928442 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928632 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928640 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 02:40:29.928670 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:29.928687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:38.810613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 02:40:38.810747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-01-22-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-05 02:40:38.810793 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:38.810809 | orchestrator |
2026-02-05 02:40:38.810821 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-05 02:40:38.810833 | orchestrator | Thursday 05 February 2026 02:40:30 +0000 (0:00:00.568) 0:00:16.282 *****
2026-02-05 02:40:38.810844 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:38.810856 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:38.810867 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:38.810877 | orchestrator |
2026-02-05 02:40:38.810888 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 02:40:38.810899 | orchestrator | Thursday 05 February 2026 02:40:30 +0000 (0:00:00.800) 0:00:17.082 *****
2026-02-05 02:40:38.810910 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:38.810920 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:38.810931 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:38.810942 | orchestrator |
2026-02-05 02:40:38.810953 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 02:40:38.810963 | orchestrator | Thursday 05 February 2026 02:40:31 +0000 (0:00:00.263) 0:00:17.345 *****
2026-02-05 02:40:38.810974 | orchestrator | ok: [testbed-node-3]
2026-02-05 02:40:38.810985 | orchestrator | ok: [testbed-node-4]
2026-02-05 02:40:38.810996 | orchestrator | ok: [testbed-node-5]
2026-02-05 02:40:38.811036 | orchestrator |
2026-02-05 02:40:38.811064 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 02:40:38.811075 | orchestrator | Thursday 05 February 2026 02:40:31 +0000 (0:00:00.589) 0:00:17.935 *****
2026-02-05 02:40:38.811086 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:38.811097 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:38.811108 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:38.811119 | orchestrator |
2026-02-05 02:40:38.811129 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 02:40:38.811140 | orchestrator | Thursday 05 February 2026 02:40:31 +0000 (0:00:00.245) 0:00:18.181 *****
2026-02-05 02:40:38.811151 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:38.811162 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:38.811173 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:38.811183 | orchestrator |
2026-02-05 02:40:38.811194 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 02:40:38.811205 | orchestrator | Thursday 05 February 2026 02:40:32 +0000 (0:00:00.556) 0:00:18.737 *****
2026-02-05 02:40:38.811215 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:38.811226 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:38.811237 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:38.811247 | orchestrator |
2026-02-05 02:40:38.811258 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 02:40:38.811269 | orchestrator | Thursday 05 February 2026 02:40:32 +0000 (0:00:00.309) 0:00:19.046 *****
2026-02-05 02:40:38.811280 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 02:40:38.811291 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 02:40:38.811302 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 02:40:38.811312 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 02:40:38.811323 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 02:40:38.811333 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 02:40:38.811344 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 02:40:38.811364 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 02:40:38.811375 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 02:40:38.811386 | orchestrator |
2026-02-05 02:40:38.811397 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 02:40:38.811408 | orchestrator | Thursday 05 February 2026 02:40:33 +0000 (0:00:00.933) 0:00:19.980 *****
2026-02-05 02:40:38.811436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 02:40:38.811449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 02:40:38.811460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 02:40:38.811471 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:40:38.811482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 02:40:38.811492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 02:40:38.811503 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 02:40:38.811514 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:40:38.811524 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 02:40:38.811535 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 02:40:38.811545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 02:40:38.811556 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:40:38.811567 | orchestrator |
2026-02-05 02:40:38.811578 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 02:40:38.811589 | orchestrator | Thursday 05 February 2026 02:40:34 +0000 (0:00:00.322) 0:00:20.302 ***** 2026-02-05
02:40:38.811600 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:40:38.811611 | orchestrator | 2026-02-05 02:40:38.811623 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 02:40:38.811635 | orchestrator | Thursday 05 February 2026 02:40:34 +0000 (0:00:00.607) 0:00:20.910 ***** 2026-02-05 02:40:38.811645 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:38.811656 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:40:38.811667 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:40:38.811678 | orchestrator | 2026-02-05 02:40:38.811688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 02:40:38.811699 | orchestrator | Thursday 05 February 2026 02:40:34 +0000 (0:00:00.311) 0:00:21.222 ***** 2026-02-05 02:40:38.811710 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:38.811721 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:40:38.811731 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:40:38.811742 | orchestrator | 2026-02-05 02:40:38.811753 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 02:40:38.811763 | orchestrator | Thursday 05 February 2026 02:40:35 +0000 (0:00:00.270) 0:00:21.493 ***** 2026-02-05 02:40:38.811774 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:38.811785 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:40:38.811795 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:40:38.811806 | orchestrator | 2026-02-05 02:40:38.811817 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 02:40:38.811828 | orchestrator | Thursday 05 February 2026 02:40:35 +0000 (0:00:00.415) 0:00:21.909 ***** 2026-02-05 
02:40:38.811838 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:40:38.811849 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:40:38.811860 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:40:38.811871 | orchestrator | 2026-02-05 02:40:38.811882 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 02:40:38.811892 | orchestrator | Thursday 05 February 2026 02:40:36 +0000 (0:00:00.360) 0:00:22.269 ***** 2026-02-05 02:40:38.811903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:40:38.811929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:40:38.811940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:40:38.811956 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:38.811967 | orchestrator | 2026-02-05 02:40:38.811978 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 02:40:38.811989 | orchestrator | Thursday 05 February 2026 02:40:36 +0000 (0:00:00.340) 0:00:22.610 ***** 2026-02-05 02:40:38.812057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:40:38.812073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:40:38.812084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:40:38.812095 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:38.812105 | orchestrator | 2026-02-05 02:40:38.812116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 02:40:38.812127 | orchestrator | Thursday 05 February 2026 02:40:36 +0000 (0:00:00.365) 0:00:22.976 ***** 2026-02-05 02:40:38.812137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 02:40:38.812148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 02:40:38.812159 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 02:40:38.812169 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:40:38.812180 | orchestrator | 2026-02-05 02:40:38.812190 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 02:40:38.812201 | orchestrator | Thursday 05 February 2026 02:40:37 +0000 (0:00:00.370) 0:00:23.347 ***** 2026-02-05 02:40:38.812212 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:40:38.812222 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:40:38.812233 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:40:38.812243 | orchestrator | 2026-02-05 02:40:38.812254 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 02:40:38.812265 | orchestrator | Thursday 05 February 2026 02:40:37 +0000 (0:00:00.300) 0:00:23.648 ***** 2026-02-05 02:40:38.812276 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 02:40:38.812287 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 02:40:38.812297 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 02:40:38.812308 | orchestrator | 2026-02-05 02:40:38.812318 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 02:40:38.812329 | orchestrator | Thursday 05 February 2026 02:40:38 +0000 (0:00:00.638) 0:00:24.286 ***** 2026-02-05 02:40:38.812340 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 02:40:38.812359 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 02:42:20.548383 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 02:42:20.548657 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 02:42:20.548678 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-05 02:42:20.548691 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 02:42:20.548703 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 02:42:20.548714 | orchestrator | 2026-02-05 02:42:20.548726 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 02:42:20.548738 | orchestrator | Thursday 05 February 2026 02:40:38 +0000 (0:00:00.764) 0:00:25.051 ***** 2026-02-05 02:42:20.548749 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 02:42:20.548792 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 02:42:20.548805 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 02:42:20.548816 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 02:42:20.548854 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 02:42:20.548866 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 02:42:20.548877 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 02:42:20.548890 | orchestrator | 2026-02-05 02:42:20.548903 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-05 02:42:20.548916 | orchestrator | Thursday 05 February 2026 02:40:40 +0000 (0:00:01.469) 0:00:26.521 ***** 2026-02-05 02:42:20.548928 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:42:20.548942 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:42:20.548955 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-05 02:42:20.548968 | orchestrator | 2026-02-05 02:42:20.548981 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-05 02:42:20.548994 | orchestrator | Thursday 05 February 2026 02:40:40 +0000 (0:00:00.456) 0:00:26.978 ***** 2026-02-05 02:42:20.549010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 02:42:20.549025 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 02:42:20.549055 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 02:42:20.549068 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 02:42:20.549081 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 02:42:20.549094 | orchestrator | 2026-02-05 02:42:20.549108 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-05 02:42:20.549122 | orchestrator | Thursday 05 February 2026 02:41:26 +0000 (0:00:45.767) 0:01:12.745 ***** 2026-02-05 02:42:20.549134 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549147 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549160 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549185 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549198 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549210 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-05 02:42:20.549223 | orchestrator | 2026-02-05 02:42:20.549235 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-05 02:42:20.549248 | orchestrator | Thursday 05 February 2026 02:41:51 +0000 (0:00:24.841) 0:01:37.587 ***** 2026-02-05 02:42:20.549281 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549303 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549315 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549328 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549341 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549353 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549366 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 02:42:20.549378 | orchestrator | 2026-02-05 02:42:20.549391 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-05 02:42:20.549404 | orchestrator | Thursday 05 February 2026 02:42:03 +0000 (0:00:11.965) 0:01:49.552 ***** 2026-02-05 02:42:20.549417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549430 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 02:42:20.549443 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 02:42:20.549456 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549468 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 02:42:20.549481 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 02:42:20.549493 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549506 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 02:42:20.549519 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 02:42:20.549532 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549545 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 02:42:20.549558 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 02:42:20.549571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549584 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-05 02:42:20.549597 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 02:42:20.549611 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 02:42:20.549623 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 02:42:20.549636 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 02:42:20.549649 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-05 02:42:20.549662 | orchestrator | 2026-02-05 02:42:20.549675 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:42:20.549694 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-05 02:42:20.549709 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-05 02:42:20.549722 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-05 02:42:20.549735 | orchestrator | 2026-02-05 02:42:20.549748 | orchestrator | 2026-02-05 02:42:20.549826 | orchestrator | 2026-02-05 02:42:20.549841 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:42:20.549854 | orchestrator | Thursday 05 February 2026 02:42:20 +0000 (0:00:17.220) 0:02:06.773 ***** 2026-02-05 02:42:20.549866 | orchestrator | =============================================================================== 2026-02-05 02:42:20.549887 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.77s 2026-02-05 02:42:20.549900 | orchestrator | generate keys ---------------------------------------------------------- 24.84s 2026-02-05 02:42:20.549912 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.22s 
2026-02-05 02:42:20.549925 | orchestrator | get keys from monitors ------------------------------------------------- 11.97s 2026-02-05 02:42:20.549938 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2026-02-05 02:42:20.549950 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.64s 2026-02-05 02:42:20.549963 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.47s 2026-02-05 02:42:20.549977 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.93s 2026-02-05 02:42:20.549997 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.88s 2026-02-05 02:42:20.550135 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.80s 2026-02-05 02:42:20.550167 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.76s 2026-02-05 02:42:20.550186 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s 2026-02-05 02:42:20.550206 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s 2026-02-05 02:42:20.550240 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2026-02-05 02:42:20.789831 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.64s 2026-02-05 02:42:20.789943 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2026-02-05 02:42:20.789965 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.61s 2026-02-05 02:42:20.789980 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.59s 2026-02-05 02:42:20.789992 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s 2026-02-05 
02:42:20.790004 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.56s 2026-02-05 02:42:22.823346 | orchestrator | 2026-02-05 02:42:22 | INFO  | Task 5948c33b-274f-413c-be2a-9a53b613449c (copy-ceph-keys) was prepared for execution. 2026-02-05 02:42:22.823427 | orchestrator | 2026-02-05 02:42:22 | INFO  | It takes a moment until task 5948c33b-274f-413c-be2a-9a53b613449c (copy-ceph-keys) has been started and output is visible here. 2026-02-05 02:42:58.634091 | orchestrator | 2026-02-05 02:42:58.634219 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-05 02:42:58.634240 | orchestrator | 2026-02-05 02:42:58.634254 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-05 02:42:58.634266 | orchestrator | Thursday 05 February 2026 02:42:27 +0000 (0:00:00.172) 0:00:00.172 ***** 2026-02-05 02:42:58.634278 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-05 02:42:58.634292 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634316 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 02:42:58.634329 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634340 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-05 02:42:58.634351 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-05 02:42:58.634362 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-05 02:42:58.634403 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-05 02:42:58.634415 | orchestrator | 2026-02-05 02:42:58.634426 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-05 02:42:58.634437 | orchestrator | Thursday 05 February 2026 02:42:31 +0000 (0:00:04.752) 0:00:04.924 ***** 2026-02-05 02:42:58.634448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-05 02:42:58.634474 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634486 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634497 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 02:42:58.634508 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634520 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-05 02:42:58.634535 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-05 02:42:58.634547 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-05 02:42:58.634559 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-05 02:42:58.634572 | orchestrator | 2026-02-05 02:42:58.634584 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-05 02:42:58.634598 | orchestrator | Thursday 05 February 2026 02:42:36 +0000 (0:00:04.528) 0:00:09.452 ***** 2026-02-05 02:42:58.634611 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 02:42:58.634622 | orchestrator | 2026-02-05 02:42:58.634635 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-05 02:42:58.634649 | orchestrator | Thursday 05 February 2026 02:42:37 +0000 (0:00:00.873) 0:00:10.326 ***** 2026-02-05 02:42:58.634662 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-05 02:42:58.634675 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634721 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634734 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 02:42:58.634747 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.634760 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-05 02:42:58.634773 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-05 02:42:58.634786 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-05 02:42:58.634799 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-05 02:42:58.634812 | orchestrator | 2026-02-05 02:42:58.634825 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-05 02:42:58.634839 | orchestrator | Thursday 05 February 2026 02:42:49 +0000 (0:00:12.047) 0:00:22.374 ***** 2026-02-05 02:42:58.634852 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-05 02:42:58.634865 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-05 02:42:58.634879 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-05 02:42:58.634892 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-05 02:42:58.634928 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-05 02:42:58.634955 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-05 02:42:58.634968 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-05 02:42:58.634981 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-05 02:42:58.634995 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-05 02:42:58.635009 | orchestrator | 2026-02-05 02:42:58.635022 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-05 02:42:58.635035 | orchestrator | Thursday 05 February 2026 02:42:52 +0000 (0:00:02.774) 0:00:25.149 ***** 2026-02-05 02:42:58.635047 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-05 02:42:58.635060 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.635072 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.635084 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 02:42:58.635096 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-05 02:42:58.635109 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-05 02:42:58.635121 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-05 02:42:58.635133 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-05 02:42:58.635145 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-05 02:42:58.635158 | orchestrator | 2026-02-05 02:42:58.635171 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:42:58.635192 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 02:42:58.635205 | orchestrator | 2026-02-05 02:42:58.635217 | orchestrator | 2026-02-05 02:42:58.635230 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:42:58.635242 | orchestrator | Thursday 05 February 2026 02:42:58 +0000 (0:00:06.387) 0:00:31.536 ***** 2026-02-05 02:42:58.635253 | orchestrator | =============================================================================== 2026-02-05 02:42:58.635264 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.05s 2026-02-05 02:42:58.635276 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.39s 2026-02-05 02:42:58.635288 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.75s 2026-02-05 02:42:58.635300 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.53s 2026-02-05 02:42:58.635311 | orchestrator | Check if target directories exist --------------------------------------- 2.77s 2026-02-05 02:42:58.635323 | orchestrator | Create share directory -------------------------------------------------- 0.87s 2026-02-05 02:43:10.648895 | orchestrator | 2026-02-05 02:43:10 | INFO  | Task 8033910f-6f68-4c12-8100-2933d45922e5 (cephclient) was prepared for execution. 
2026-02-05 02:43:10.649013 | orchestrator | 2026-02-05 02:43:10 | INFO  | It takes a moment until task 8033910f-6f68-4c12-8100-2933d45922e5 (cephclient) has been started and output is visible here.
2026-02-05 02:44:12.491870 | orchestrator |
2026-02-05 02:44:12.491974 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-05 02:44:12.491988 | orchestrator |
2026-02-05 02:44:12.491997 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-05 02:44:12.492007 | orchestrator | Thursday 05 February 2026 02:43:14 +0000 (0:00:00.230) 0:00:00.230 *****
2026-02-05 02:44:12.492016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-05 02:44:12.492027 | orchestrator |
2026-02-05 02:44:12.492058 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-05 02:44:12.492068 | orchestrator | Thursday 05 February 2026 02:43:15 +0000 (0:00:00.271) 0:00:00.501 *****
2026-02-05 02:44:12.492077 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-05 02:44:12.492086 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-05 02:44:12.492095 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-05 02:44:12.492104 | orchestrator |
2026-02-05 02:44:12.492112 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-05 02:44:12.492121 | orchestrator | Thursday 05 February 2026 02:43:16 +0000 (0:00:01.268) 0:00:01.769 *****
2026-02-05 02:44:12.492130 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-05 02:44:12.492138 | orchestrator |
2026-02-05 02:44:12.492147 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-05 02:44:12.492156 | orchestrator | Thursday 05 February 2026 02:43:18 +0000 (0:00:01.531) 0:00:03.301 *****
2026-02-05 02:44:12.492164 | orchestrator | changed: [testbed-manager]
2026-02-05 02:44:12.492173 | orchestrator |
2026-02-05 02:44:12.492182 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-05 02:44:12.492190 | orchestrator | Thursday 05 February 2026 02:43:18 +0000 (0:00:00.886) 0:00:04.188 *****
2026-02-05 02:44:12.492199 | orchestrator | changed: [testbed-manager]
2026-02-05 02:44:12.492207 | orchestrator |
2026-02-05 02:44:12.492216 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-05 02:44:12.492224 | orchestrator | Thursday 05 February 2026 02:43:19 +0000 (0:00:00.917) 0:00:05.105 *****
2026-02-05 02:44:12.492233 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-05 02:44:12.492241 | orchestrator | ok: [testbed-manager]
2026-02-05 02:44:12.492250 | orchestrator |
2026-02-05 02:44:12.492258 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-05 02:44:12.492267 | orchestrator | Thursday 05 February 2026 02:44:02 +0000 (0:00:42.452) 0:00:47.558 *****
2026-02-05 02:44:12.492275 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-05 02:44:12.492284 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-05 02:44:12.492292 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-05 02:44:12.492301 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-05 02:44:12.492309 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-05 02:44:12.492318 | orchestrator |
2026-02-05 02:44:12.492327 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-05 02:44:12.492335 | orchestrator | Thursday 05 February 2026 02:44:06 +0000 (0:00:04.161) 0:00:51.719 *****
2026-02-05 02:44:12.492344 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-05 02:44:12.492352 | orchestrator |
2026-02-05 02:44:12.492361 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-05 02:44:12.492369 | orchestrator | Thursday 05 February 2026 02:44:06 +0000 (0:00:00.471) 0:00:52.190 *****
2026-02-05 02:44:12.492378 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:44:12.492387 | orchestrator |
2026-02-05 02:44:12.492395 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-05 02:44:12.492403 | orchestrator | Thursday 05 February 2026 02:44:07 +0000 (0:00:00.139) 0:00:52.330 *****
2026-02-05 02:44:12.492412 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:44:12.492422 | orchestrator |
2026-02-05 02:44:12.492432 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-05 02:44:12.492441 | orchestrator | Thursday 05 February 2026 02:44:07 +0000 (0:00:00.513) 0:00:52.843 *****
2026-02-05 02:44:12.492465 | orchestrator | changed: [testbed-manager]
2026-02-05 02:44:12.492476 | orchestrator |
2026-02-05 02:44:12.492486 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-05 02:44:12.492507 | orchestrator | Thursday 05 February 2026 02:44:09 +0000 (0:00:01.719) 0:00:54.563 *****
2026-02-05 02:44:12.492517 | orchestrator | changed: [testbed-manager]
2026-02-05 02:44:12.492527 | orchestrator |
2026-02-05 02:44:12.492537 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-05 02:44:12.492595 | orchestrator | Thursday 05 February 2026 02:44:10 +0000 (0:00:00.722) 0:00:55.285 *****
2026-02-05 02:44:12.492605 | orchestrator | changed: [testbed-manager]
2026-02-05 02:44:12.492629 | orchestrator |
2026-02-05 02:44:12.492648 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-05 02:44:12.492657 | orchestrator | Thursday 05 February 2026 02:44:10 +0000 (0:00:00.640) 0:00:55.925 *****
2026-02-05 02:44:12.492666 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-05 02:44:12.492675 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-05 02:44:12.492684 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-05 02:44:12.492693 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-05 02:44:12.492709 | orchestrator |
2026-02-05 02:44:12.492726 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:44:12.492742 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 02:44:12.492759 | orchestrator |
2026-02-05 02:44:12.492774 | orchestrator |
2026-02-05 02:44:12.492810 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:44:12.492825 | orchestrator | Thursday 05 February 2026 02:44:12 +0000 (0:00:01.518) 0:00:57.444 *****
2026-02-05 02:44:12.492841 | orchestrator | ===============================================================================
2026-02-05 02:44:12.492855 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.45s
2026-02-05 02:44:12.492869 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.16s
2026-02-05 02:44:12.492883 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.72s
2026-02-05 02:44:12.492896 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.53s
2026-02-05 02:44:12.492911 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.52s
2026-02-05 02:44:12.492925 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.27s
2026-02-05 02:44:12.492941 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s
2026-02-05 02:44:12.492956 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.89s
2026-02-05 02:44:12.492970 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s
2026-02-05 02:44:12.492986 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s
2026-02-05 02:44:12.493000 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.51s
2026-02-05 02:44:12.493015 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2026-02-05 02:44:12.493024 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s
2026-02-05 02:44:12.493033 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-02-05 02:44:14.860965 | orchestrator | 2026-02-05 02:44:14 | INFO  | Task 194a2eb5-f251-4e22-b15b-a08cc973f829 (ceph-bootstrap-dashboard) was prepared for execution.
2026-02-05 02:44:14.861065 | orchestrator | 2026-02-05 02:44:14 | INFO  | It takes a moment until task 194a2eb5-f251-4e22-b15b-a08cc973f829 (ceph-bootstrap-dashboard) has been started and output is visible here.
2026-02-05 02:45:34.398602 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 02:45:34.398733 | orchestrator | 2.16.14
2026-02-05 02:45:34.398755 | orchestrator |
2026-02-05 02:45:34.398772 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-02-05 02:45:34.398789 | orchestrator |
2026-02-05 02:45:34.398805 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-05 02:45:34.398851 | orchestrator | Thursday 05 February 2026 02:44:18 +0000 (0:00:00.251) 0:00:00.251 *****
2026-02-05 02:45:34.398868 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.398886 | orchestrator |
2026-02-05 02:45:34.398902 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-05 02:45:34.398917 | orchestrator | Thursday 05 February 2026 02:44:20 +0000 (0:00:01.615) 0:00:01.866 *****
2026-02-05 02:45:34.398933 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.398949 | orchestrator |
2026-02-05 02:45:34.398964 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-05 02:45:34.398980 | orchestrator | Thursday 05 February 2026 02:44:21 +0000 (0:00:00.943) 0:00:02.809 *****
2026-02-05 02:45:34.398995 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399011 | orchestrator |
2026-02-05 02:45:34.399026 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-05 02:45:34.399042 | orchestrator | Thursday 05 February 2026 02:44:22 +0000 (0:00:00.996) 0:00:03.806 *****
2026-02-05 02:45:34.399057 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399073 | orchestrator |
2026-02-05 02:45:34.399088 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-05 02:45:34.399104 | orchestrator | Thursday 05 February 2026 02:44:23 +0000 (0:00:01.035) 0:00:04.841 *****
2026-02-05 02:45:34.399119 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399135 | orchestrator |
2026-02-05 02:45:34.399150 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-05 02:45:34.399166 | orchestrator | Thursday 05 February 2026 02:44:24 +0000 (0:00:00.951) 0:00:05.793 *****
2026-02-05 02:45:34.399199 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399214 | orchestrator |
2026-02-05 02:45:34.399230 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-05 02:45:34.399246 | orchestrator | Thursday 05 February 2026 02:44:25 +0000 (0:00:00.984) 0:00:06.777 *****
2026-02-05 02:45:34.399261 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399277 | orchestrator |
2026-02-05 02:45:34.399292 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-05 02:45:34.399308 | orchestrator | Thursday 05 February 2026 02:44:27 +0000 (0:00:02.083) 0:00:08.861 *****
2026-02-05 02:45:34.399324 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399339 | orchestrator |
2026-02-05 02:45:34.399354 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-05 02:45:34.399370 | orchestrator | Thursday 05 February 2026 02:44:28 +0000 (0:00:01.185) 0:00:10.046 *****
2026-02-05 02:45:34.399385 | orchestrator | changed: [testbed-manager]
2026-02-05 02:45:34.399401 | orchestrator |
2026-02-05 02:45:34.399440 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-05 02:45:34.399458 | orchestrator | Thursday 05 February 2026 02:45:09 +0000 (0:00:41.011) 0:00:51.058 *****
2026-02-05 02:45:34.399473 | orchestrator | skipping: [testbed-manager]
2026-02-05 02:45:34.399489 | orchestrator |
2026-02-05 02:45:34.399504 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-05 02:45:34.399520 | orchestrator |
2026-02-05 02:45:34.399535 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-05 02:45:34.399551 | orchestrator | Thursday 05 February 2026 02:45:09 +0000 (0:00:00.165) 0:00:51.223 *****
2026-02-05 02:45:34.399566 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:45:34.399582 | orchestrator |
2026-02-05 02:45:34.399597 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-05 02:45:34.399612 | orchestrator |
2026-02-05 02:45:34.399628 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-05 02:45:34.399643 | orchestrator | Thursday 05 February 2026 02:45:21 +0000 (0:00:11.807) 0:01:03.030 *****
2026-02-05 02:45:34.399659 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:45:34.399674 | orchestrator |
2026-02-05 02:45:34.399690 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-05 02:45:34.399717 | orchestrator |
2026-02-05 02:45:34.399732 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-05 02:45:34.399748 | orchestrator | Thursday 05 February 2026 02:45:32 +0000 (0:00:11.196) 0:01:14.227 *****
2026-02-05 02:45:34.399764 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:45:34.399780 | orchestrator |
2026-02-05 02:45:34.399795 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:45:34.399813 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 02:45:34.399829 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:45:34.399846 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:45:34.399862 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 02:45:34.399877 | orchestrator |
2026-02-05 02:45:34.399893 | orchestrator |
2026-02-05 02:45:34.399908 | orchestrator |
2026-02-05 02:45:34.399924 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:45:34.399940 | orchestrator | Thursday 05 February 2026 02:45:34 +0000 (0:00:01.252) 0:01:15.479 *****
2026-02-05 02:45:34.399955 | orchestrator | ===============================================================================
2026-02-05 02:45:34.399971 | orchestrator | Create admin user ------------------------------------------------------ 41.01s
2026-02-05 02:45:34.400007 | orchestrator | Restart ceph manager service ------------------------------------------- 24.26s
2026-02-05 02:45:34.400024 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s
2026-02-05 02:45:34.400039 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.62s
2026-02-05 02:45:34.400055 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s
2026-02-05 02:45:34.400070 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s
2026-02-05 02:45:34.400086 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.00s
2026-02-05 02:45:34.400101 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.98s
2026-02-05 02:45:34.400117 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.95s
2026-02-05 02:45:34.400132 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.94s
2026-02-05 02:45:34.400148 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s
2026-02-05 02:45:34.594989 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-02-05 02:45:36.516726 | orchestrator | 2026-02-05 02:45:36 | INFO  | Task 424d25d8-6a16-47c6-97e4-4041b9b71f32 (keystone) was prepared for execution.
2026-02-05 02:45:36.516827 | orchestrator | 2026-02-05 02:45:36 | INFO  | It takes a moment until task 424d25d8-6a16-47c6-97e4-4041b9b71f32 (keystone) has been started and output is visible here.
2026-02-05 02:45:43.460158 | orchestrator |
2026-02-05 02:45:43.460309 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 02:45:43.460338 | orchestrator |
2026-02-05 02:45:43.460965 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 02:45:43.461006 | orchestrator | Thursday 05 February 2026 02:45:40 +0000 (0:00:00.262) 0:00:00.263 *****
2026-02-05 02:45:43.461018 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:45:43.461031 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:45:43.461064 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:45:43.461076 | orchestrator |
2026-02-05 02:45:43.461088 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 02:45:43.461099 | orchestrator | Thursday 05 February 2026 02:45:40 +0000 (0:00:00.320) 0:00:00.583 *****
2026-02-05 02:45:43.461132 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-05 02:45:43.461144 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-05 02:45:43.461155 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-05 02:45:43.461166 | orchestrator |
2026-02-05 02:45:43.461177 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-05 02:45:43.461188 | orchestrator |
2026-02-05 02:45:43.461199 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 02:45:43.461210 | orchestrator | Thursday 05 February 2026 02:45:41 +0000 (0:00:00.421) 0:00:01.005 *****
2026-02-05 02:45:43.461221 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:45:43.461233 | orchestrator |
2026-02-05 02:45:43.461244 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-05 02:45:43.461255 | orchestrator | Thursday 05 February 2026 02:45:41 +0000 (0:00:00.569) 0:00:01.574 *****
2026-02-05 02:45:43.461271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:43.461287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:43.461327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:43.461350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:43.461362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:43.461374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:43.461386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:43.461397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:43.461433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:43.461452 | orchestrator |
2026-02-05 02:45:43.461464 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-05 02:45:43.461482 | orchestrator | Thursday 05 February 2026 02:45:43 +0000 (0:00:01.459) 0:00:03.034 *****
2026-02-05 02:45:49.088277 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:45:49.088449 | orchestrator |
2026-02-05 02:45:49.088483 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-05 02:45:49.088523 | orchestrator | Thursday 05 February 2026 02:45:43 +0000 (0:00:00.292) 0:00:03.326 *****
2026-02-05 02:45:49.088543 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:45:49.088561 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:45:49.088579 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:45:49.088597 | orchestrator |
2026-02-05 02:45:49.088617 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-05 02:45:49.088637 | orchestrator | Thursday 05 February 2026 02:45:44 +0000 (0:00:00.314) 0:00:03.640 *****
2026-02-05 02:45:49.088656 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 02:45:49.088675 | orchestrator |
2026-02-05 02:45:49.088695 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 02:45:49.088716 | orchestrator | Thursday 05 February 2026 02:45:44 +0000 (0:00:00.819) 0:00:04.459 *****
2026-02-05 02:45:49.088735 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:45:49.088754 | orchestrator |
2026-02-05 02:45:49.088771 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-05 02:45:49.088790 | orchestrator | Thursday 05 February 2026 02:45:45 +0000 (0:00:00.552) 0:00:05.012 *****
2026-02-05 02:45:49.088814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:49.088840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:49.088861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:49.088940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:49.088967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:49.088987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:49.089006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:49.089025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:49.089065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:49.089086 | orchestrator |
2026-02-05 02:45:49.089103 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-05 02:45:49.089123 | orchestrator | Thursday 05 February 2026 02:45:48 +0000 (0:00:03.085) 0:00:08.097 *****
2026-02-05 02:45:49.089158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:49.880060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:49.880228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 02:45:49.880254 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:45:49.880269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 02:45:49.880298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 02:45:49.880313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/,
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:45:49.880324 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:45:49.880352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:45:49.880363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-05 02:45:49.880374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:45:49.880390 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:45:49.880435 | orchestrator | 2026-02-05 02:45:49.880446 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-05 02:45:49.880457 | orchestrator | Thursday 05 February 2026 02:45:49 +0000 (0:00:00.573) 0:00:08.670 ***** 2026-02-05 02:45:49.880467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:45:49.880483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:45:49.880501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:45:53.096819 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:45:53.096932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:45:53.096965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:45:53.097018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:45:53.097039 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 02:45:53.097075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:45:53.097098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:45:53.097142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:45:53.097162 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:45:53.097181 | orchestrator | 2026-02-05 02:45:53.097202 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-05 02:45:53.097223 | orchestrator | Thursday 05 February 2026 02:45:49 +0000 (0:00:00.788) 0:00:09.458 ***** 2026-02-05 02:45:53.097238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:45:53.097262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:45:53.097282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:45:53.097305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 02:45:57.612361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 02:45:57.612530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-05 02:45:57.612548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:45:57.612561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:45:57.612586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 
02:45:57.612599 | orchestrator | 2026-02-05 02:45:57.612612 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-05 02:45:57.612633 | orchestrator | Thursday 05 February 2026 02:45:53 +0000 (0:00:03.214) 0:00:12.673 ***** 2026-02-05 02:45:57.612678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:45:57.612704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-05 02:45:57.612740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:45:57.612763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:45:57.612785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:45:57.612806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:46:00.895076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:46:00.895185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:46:00.895199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:46:00.895211 | orchestrator | 2026-02-05 02:46:00.895223 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-05 02:46:00.895234 | orchestrator | Thursday 05 February 2026 02:45:57 +0000 (0:00:04.513) 0:00:17.187 ***** 2026-02-05 02:46:00.895244 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:46:00.895255 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:46:00.895265 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:46:00.895275 | orchestrator | 
2026-02-05 02:46:00.895285 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-05 02:46:00.895295 | orchestrator | Thursday 05 February 2026 02:45:58 +0000 (0:00:01.295) 0:00:18.483 ***** 2026-02-05 02:46:00.895305 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:46:00.895316 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:46:00.895326 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:46:00.895336 | orchestrator | 2026-02-05 02:46:00.895346 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-05 02:46:00.895356 | orchestrator | Thursday 05 February 2026 02:45:59 +0000 (0:00:00.561) 0:00:19.045 ***** 2026-02-05 02:46:00.895366 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:46:00.895376 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:46:00.895413 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:46:00.895423 | orchestrator | 2026-02-05 02:46:00.895445 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-05 02:46:00.895455 | orchestrator | Thursday 05 February 2026 02:45:59 +0000 (0:00:00.530) 0:00:19.576 ***** 2026-02-05 02:46:00.895465 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:46:00.895474 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:46:00.895484 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:46:00.895494 | orchestrator | 2026-02-05 02:46:00.895504 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-05 02:46:00.895514 | orchestrator | Thursday 05 February 2026 02:46:00 +0000 (0:00:00.307) 0:00:19.883 ***** 2026-02-05 02:46:00.895542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:46:00.895563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:46:00.895575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:46:00.895586 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:46:00.895597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:46:00.895675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:46:00.895689 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:46:00.895753 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:46:00.895778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 02:46:19.241946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 02:46:19.242097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 02:46:19.242113 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:46:19.242124 | orchestrator | 2026-02-05 02:46:19.242135 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 02:46:19.242145 | orchestrator | Thursday 05 February 2026 02:46:00 +0000 (0:00:00.585) 0:00:20.469 ***** 2026-02-05 02:46:19.242154 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:46:19.242163 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:46:19.242171 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:46:19.242179 | orchestrator | 2026-02-05 02:46:19.242188 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-05 02:46:19.242197 | orchestrator | Thursday 05 February 2026 02:46:01 +0000 (0:00:00.318) 0:00:20.787 ***** 2026-02-05 02:46:19.242206 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 02:46:19.242216 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 02:46:19.242225 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 02:46:19.242254 | orchestrator | 2026-02-05 02:46:19.242263 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-05 02:46:19.242286 | orchestrator | Thursday 05 February 2026 02:46:02 +0000 (0:00:01.788) 0:00:22.576 ***** 2026-02-05 02:46:19.242294 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:46:19.242302 | orchestrator | 2026-02-05 02:46:19.242310 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-05 02:46:19.242319 | orchestrator | Thursday 05 February 2026 02:46:04 +0000 (0:00:01.091) 0:00:23.667 ***** 2026-02-05 02:46:19.242327 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:46:19.242334 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:46:19.242341 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:46:19.242349 | orchestrator | 2026-02-05 02:46:19.242437 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-05 02:46:19.242446 | orchestrator | Thursday 05 February 2026 02:46:04 +0000 (0:00:00.604) 0:00:24.272 ***** 2026-02-05 02:46:19.242464 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-05 02:46:19.242472 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:46:19.242481 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 02:46:19.242489 | orchestrator | 2026-02-05 02:46:19.242497 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-05 02:46:19.242507 | orchestrator | Thursday 05 February 2026 02:46:05 +0000 (0:00:01.000) 
0:00:25.272 ***** 2026-02-05 02:46:19.242516 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:46:19.242525 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:46:19.242534 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:46:19.242542 | orchestrator | 2026-02-05 02:46:19.242550 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-05 02:46:19.242558 | orchestrator | Thursday 05 February 2026 02:46:06 +0000 (0:00:00.322) 0:00:25.595 ***** 2026-02-05 02:46:19.242567 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 02:46:19.242576 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 02:46:19.242585 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 02:46:19.242593 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 02:46:19.242602 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 02:46:19.242610 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 02:46:19.242619 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 02:46:19.242628 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 02:46:19.242657 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 02:46:19.242675 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 02:46:19.242685 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 
02:46:19.242696 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 02:46:19.242706 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 02:46:19.242716 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 02:46:19.242726 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 02:46:19.242736 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 02:46:19.242758 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 02:46:19.242768 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 02:46:19.242779 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 02:46:19.242790 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 02:46:19.242799 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 02:46:19.242807 | orchestrator | 2026-02-05 02:46:19.242814 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-05 02:46:19.242822 | orchestrator | Thursday 05 February 2026 02:46:14 +0000 (0:00:08.506) 0:00:34.102 ***** 2026-02-05 02:46:19.242829 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 02:46:19.242836 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 02:46:19.242844 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 02:46:19.242852 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 02:46:19.242859 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 02:46:19.242866 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 02:46:19.242873 | orchestrator | 2026-02-05 02:46:19.242881 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-05 02:46:19.242896 | orchestrator | Thursday 05 February 2026 02:46:17 +0000 (0:00:02.506) 0:00:36.608 ***** 2026-02-05 02:46:19.242908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:46:19.242925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:47:58.015719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 02:47:58.015848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 02:47:58.015883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 02:47:58.015902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 02:47:58.015925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:47:58.015967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:47:58.015994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 02:47:58.016010 | orchestrator | 2026-02-05 02:47:58.016026 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-05 02:47:58.016043 | orchestrator | Thursday 05 February 2026 02:46:19 +0000 (0:00:00.251) 0:00:38.820 *****
2026-02-05 02:47:58.016057 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:47:58.016073 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:47:58.016086 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:47:58.016100 | orchestrator |
2026-02-05 02:47:58.016114 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-05 02:47:58.016127 | orchestrator | Thursday 05 February 2026 02:46:19 +0000 (0:00:00.251) 0:00:39.071 *****
2026-02-05 02:47:58.016141 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016155 | orchestrator |
2026-02-05 02:47:58.016169 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-05 02:47:58.016184 | orchestrator | Thursday 05 February 2026 02:46:21 +0000 (0:00:02.511) 0:00:41.582 *****
2026-02-05 02:47:58.016199 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016213 | orchestrator |
2026-02-05 02:47:58.016229 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-05 02:47:58.016301 | orchestrator | Thursday 05 February 2026 02:46:24 +0000 (0:00:02.325) 0:00:43.908 *****
2026-02-05 02:47:58.016318 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:47:58.016335 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:47:58.016351 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:47:58.016366 | orchestrator |
2026-02-05 02:47:58.016380 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-05 02:47:58.016391 | orchestrator | Thursday 05 February 2026 02:46:25 +0000 (0:00:00.770) 0:00:44.679 *****
2026-02-05 02:47:58.016402 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:47:58.016412 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:47:58.016422 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:47:58.016432 | orchestrator |
2026-02-05 02:47:58.016443 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-05 02:47:58.016461 | orchestrator | Thursday 05 February 2026 02:46:25 +0000 (0:00:00.333) 0:00:45.013 *****
2026-02-05 02:47:58.016472 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:47:58.016483 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:47:58.016493 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:47:58.016503 | orchestrator |
2026-02-05 02:47:58.016514 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-05 02:47:58.016525 | orchestrator | Thursday 05 February 2026 02:46:25 +0000 (0:00:00.320) 0:00:45.333 *****
2026-02-05 02:47:58.016535 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016546 | orchestrator |
2026-02-05 02:47:58.016556 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-05 02:47:58.016567 | orchestrator | Thursday 05 February 2026 02:46:41 +0000 (0:00:15.395) 0:01:00.729 *****
2026-02-05 02:47:58.016577 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016587 | orchestrator |
2026-02-05 02:47:58.016598 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-05 02:47:58.016608 | orchestrator | Thursday 05 February 2026 02:46:52 +0000 (0:00:00.067) 0:01:11.995 *****
2026-02-05 02:47:58.016631 | orchestrator |
2026-02-05 02:47:58.016640 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-05 02:47:58.016649 | orchestrator | Thursday 05 February 2026 02:46:52 +0000 (0:00:00.068) 0:01:12.063 *****
2026-02-05 02:47:58.016658 | orchestrator |
2026-02-05 02:47:58.016667 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-05 02:47:58.016675 | orchestrator | Thursday 05 February 2026 02:46:52 +0000 (0:00:00.068) 0:01:12.131 *****
2026-02-05 02:47:58.016684 | orchestrator |
2026-02-05 02:47:58.016693 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-05 02:47:58.016701 | orchestrator | Thursday 05 February 2026 02:46:52 +0000 (0:00:00.069) 0:01:12.201 *****
2026-02-05 02:47:58.016710 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016719 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:47:58.016730 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:47:58.016745 | orchestrator |
2026-02-05 02:47:58.016758 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-05 02:47:58.016772 | orchestrator | Thursday 05 February 2026 02:47:39 +0000 (0:00:47.079) 0:01:59.281 *****
2026-02-05 02:47:58.016785 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016800 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:47:58.016815 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:47:58.016829 | orchestrator |
2026-02-05 02:47:58.016843 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-05 02:47:58.016858 | orchestrator | Thursday 05 February 2026 02:47:49 +0000 (0:00:09.750) 0:02:09.032 *****
2026-02-05 02:47:58.016877 | orchestrator | changed: [testbed-node-1]
2026-02-05 02:47:58.016896 | orchestrator | changed: [testbed-node-2]
2026-02-05 02:47:58.016910 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:47:58.016924 | orchestrator |
2026-02-05 02:47:58.016938 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 02:47:58.016950 | orchestrator | Thursday 05 February 2026 02:47:57 +0000 (0:00:07.956) 0:02:16.988 *****
2026-02-05 02:47:58.016978 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 02:48:50.914744 | orchestrator |
2026-02-05 02:48:50.914846 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-05 02:48:50.914863 | orchestrator | Thursday 05 February 2026 02:47:58 +0000 (0:00:00.604) 0:02:17.592 *****
2026-02-05 02:48:50.914876 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:48:50.914886 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:48:50.914894 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:48:50.914903 | orchestrator |
2026-02-05 02:48:50.914911 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-05 02:48:50.914919 | orchestrator | Thursday 05 February 2026 02:47:58 +0000 (0:00:00.722) 0:02:18.315 *****
2026-02-05 02:48:50.914927 | orchestrator | changed: [testbed-node-0]
2026-02-05 02:48:50.914936 | orchestrator |
2026-02-05 02:48:50.914945 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-05 02:48:50.914953 | orchestrator | Thursday 05 February 2026 02:48:00 +0000 (0:00:02.039) 0:02:20.354 *****
2026-02-05 02:48:50.914961 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-05 02:48:50.914969 | orchestrator |
2026-02-05 02:48:50.914977 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-05 02:48:50.914985 | orchestrator | Thursday 05 February 2026 02:48:13 +0000 (0:00:13.140) 0:02:33.495 *****
2026-02-05 02:48:50.914993 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-05 02:48:50.915001 | orchestrator |
2026-02-05 02:48:50.915008 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-05 02:48:50.915016 | orchestrator | Thursday 05 February 2026 02:48:38 +0000 (0:00:24.929) 0:02:58.424 *****
2026-02-05 02:48:50.915024 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-05 02:48:50.915034 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-05 02:48:50.915062 | orchestrator |
2026-02-05 02:48:50.915071 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-05 02:48:50.915078 | orchestrator | Thursday 05 February 2026 02:48:45 +0000 (0:00:06.944) 0:03:05.369 *****
2026-02-05 02:48:50.915086 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:48:50.915094 | orchestrator |
2026-02-05 02:48:50.915102 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-05 02:48:50.915110 | orchestrator | Thursday 05 February 2026 02:48:45 +0000 (0:00:00.136) 0:03:05.506 *****
2026-02-05 02:48:50.915118 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:48:50.915125 | orchestrator |
2026-02-05 02:48:50.915133 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-05 02:48:50.915141 | orchestrator | Thursday 05 February 2026 02:48:46 +0000 (0:00:00.114) 0:03:05.620 *****
2026-02-05 02:48:50.915149 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:48:50.915157 | orchestrator |
2026-02-05 02:48:50.915178 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-05 02:48:50.915272 | orchestrator | Thursday 05 February 2026 02:48:46 +0000 (0:00:00.138) 0:03:05.759 *****
2026-02-05 02:48:50.915288 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:48:50.915302 | orchestrator |
2026-02-05 02:48:50.915317 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-05 02:48:50.915333 | orchestrator | Thursday 05 February 2026 02:48:46 +0000 (0:00:00.330) 0:03:06.090 *****
2026-02-05 02:48:50.915347 | orchestrator | ok: [testbed-node-0] 2026-02-05
02:48:50.915359 | orchestrator |
2026-02-05 02:48:50.915369 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 02:48:50.915378 | orchestrator | Thursday 05 February 2026 02:48:49 +0000 (0:00:03.354) 0:03:09.444 *****
2026-02-05 02:48:50.915388 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:48:50.915397 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:48:50.915406 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:48:50.915416 | orchestrator |
2026-02-05 02:48:50.915425 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 02:48:50.915435 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-05 02:48:50.915446 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-05 02:48:50.915456 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-05 02:48:50.915466 | orchestrator |
2026-02-05 02:48:50.915473 | orchestrator |
2026-02-05 02:48:50.915482 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 02:48:50.915490 | orchestrator | Thursday 05 February 2026 02:48:50 +0000 (0:00:00.699) 0:03:10.143 *****
2026-02-05 02:48:50.915497 | orchestrator | ===============================================================================
2026-02-05 02:48:50.915505 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 47.08s
2026-02-05 02:48:50.915513 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.93s
2026-02-05 02:48:50.915521 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.40s
2026-02-05 02:48:50.915531 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.14s
2026-02-05 02:48:50.915543 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.27s
2026-02-05 02:48:50.915556 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.75s
2026-02-05 02:48:50.915568 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.51s
2026-02-05 02:48:50.915581 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.96s
2026-02-05 02:48:50.915606 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.94s
2026-02-05 02:48:50.915639 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.51s
2026-02-05 02:48:50.915649 | orchestrator | keystone : Creating default user role ----------------------------------- 3.35s
2026-02-05 02:48:50.915657 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.21s
2026-02-05 02:48:50.915664 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.09s
2026-02-05 02:48:50.915672 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.51s
2026-02-05 02:48:50.915680 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.51s
2026-02-05 02:48:50.915702 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.33s
2026-02-05 02:48:50.915710 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.21s
2026-02-05 02:48:50.915718 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.04s
2026-02-05 02:48:50.915726 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.79s
2026-02-05 02:48:50.915743 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.46s
2026-02-05 02:48:53.182698 | orchestrator | 2026-02-05 02:48:53 | INFO  | Task bb38fe2d-376e-4f8f-8d69-9fcc5f233744 (placement) was prepared for execution.
2026-02-05 02:48:53.182827 | orchestrator | 2026-02-05 02:48:53 | INFO  | It takes a moment until task bb38fe2d-376e-4f8f-8d69-9fcc5f233744 (placement) has been started and output is visible here.
2026-02-05 02:49:29.171226 | orchestrator |
2026-02-05 02:49:29.171348 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 02:49:29.171364 | orchestrator |
2026-02-05 02:49:29.171377 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 02:49:29.171389 | orchestrator | Thursday 05 February 2026 02:48:57 +0000 (0:00:00.252) 0:00:00.252 *****
2026-02-05 02:49:29.171400 | orchestrator | ok: [testbed-node-0]
2026-02-05 02:49:29.171412 | orchestrator | ok: [testbed-node-1]
2026-02-05 02:49:29.171423 | orchestrator | ok: [testbed-node-2]
2026-02-05 02:49:29.171434 | orchestrator |
2026-02-05 02:49:29.171446 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 02:49:29.171457 | orchestrator | Thursday 05 February 2026 02:48:57 +0000 (0:00:00.301) 0:00:00.554 *****
2026-02-05 02:49:29.171469 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-05 02:49:29.171481 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-05 02:49:29.171492 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-05 02:49:29.171502 | orchestrator |
2026-02-05 02:49:29.171530 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-05 02:49:29.171542 | orchestrator |
2026-02-05 02:49:29.171553 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-05 02:49:29.171564 | orchestrator | Thursday 05 February 2026
02:48:57 +0000 (0:00:00.434) 0:00:00.989 ***** 2026-02-05 02:49:29.171575 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:49:29.171587 | orchestrator | 2026-02-05 02:49:29.171598 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-05 02:49:29.171609 | orchestrator | Thursday 05 February 2026 02:48:58 +0000 (0:00:00.549) 0:00:01.538 ***** 2026-02-05 02:49:29.171621 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-05 02:49:29.171631 | orchestrator | 2026-02-05 02:49:29.171642 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-05 02:49:29.171653 | orchestrator | Thursday 05 February 2026 02:49:02 +0000 (0:00:04.110) 0:00:05.649 ***** 2026-02-05 02:49:29.171664 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-05 02:49:29.171700 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-05 02:49:29.171712 | orchestrator | 2026-02-05 02:49:29.171723 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-05 02:49:29.171735 | orchestrator | Thursday 05 February 2026 02:49:09 +0000 (0:00:06.608) 0:00:12.258 ***** 2026-02-05 02:49:29.171749 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-05 02:49:29.171761 | orchestrator | 2026-02-05 02:49:29.171774 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-05 02:49:29.171786 | orchestrator | Thursday 05 February 2026 02:49:13 +0000 (0:00:03.939) 0:00:16.198 ***** 2026-02-05 02:49:29.171799 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 02:49:29.171813 | orchestrator | changed: [testbed-node-0] => (item=placement 
-> service) 2026-02-05 02:49:29.171825 | orchestrator | 2026-02-05 02:49:29.171838 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-05 02:49:29.171851 | orchestrator | Thursday 05 February 2026 02:49:17 +0000 (0:00:04.232) 0:00:20.431 ***** 2026-02-05 02:49:29.171864 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 02:49:29.171877 | orchestrator | 2026-02-05 02:49:29.171890 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-05 02:49:29.171903 | orchestrator | Thursday 05 February 2026 02:49:20 +0000 (0:00:03.407) 0:00:23.838 ***** 2026-02-05 02:49:29.171916 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-05 02:49:29.171929 | orchestrator | 2026-02-05 02:49:29.171941 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 02:49:29.171954 | orchestrator | Thursday 05 February 2026 02:49:25 +0000 (0:00:04.248) 0:00:28.087 ***** 2026-02-05 02:49:29.171967 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:49:29.171980 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:49:29.171992 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:49:29.172005 | orchestrator | 2026-02-05 02:49:29.172018 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-05 02:49:29.172030 | orchestrator | Thursday 05 February 2026 02:49:25 +0000 (0:00:00.308) 0:00:28.396 ***** 2026-02-05 02:49:29.172047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:29.172089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:29.172112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:29.172125 | orchestrator | 2026-02-05 02:49:29.172137 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-05 02:49:29.172172 | orchestrator | Thursday 05 February 2026 02:49:26 +0000 (0:00:00.819) 0:00:29.215 ***** 2026-02-05 02:49:29.172192 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:49:29.172211 | orchestrator | 2026-02-05 02:49:29.172231 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-05 02:49:29.172249 | orchestrator | Thursday 05 February 2026 02:49:26 +0000 (0:00:00.350) 0:00:29.565 ***** 2026-02-05 02:49:29.172266 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:49:29.172278 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:49:29.172289 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:49:29.172300 | orchestrator | 2026-02-05 02:49:29.172311 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 02:49:29.172321 | orchestrator | Thursday 05 February 2026 02:49:26 +0000 (0:00:00.321) 0:00:29.887 ***** 2026-02-05 02:49:29.172332 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:49:29.172343 | orchestrator | 2026-02-05 02:49:29.172354 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-05 02:49:29.172365 | orchestrator | Thursday 05 February 2026 02:49:27 +0000 
(0:00:00.530) 0:00:30.418 ***** 2026-02-05 02:49:29.172377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:29.172400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:32.284876 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:32.284982 | orchestrator | 2026-02-05 02:49:32.284999 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-05 02:49:32.285012 | orchestrator | Thursday 05 February 2026 02:49:29 +0000 (0:00:01.763) 0:00:32.181 ***** 2026-02-05 02:49:32.285025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:32.285038 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:49:32.285050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:32.285062 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:49:32.285074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:32.285107 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:49:32.285118 | orchestrator | 2026-02-05 02:49:32.285130 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-05 02:49:32.285208 | orchestrator | Thursday 05 February 2026 02:49:29 +0000 (0:00:00.610) 0:00:32.792 ***** 2026-02-05 02:49:32.285230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:32.285243 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:49:32.285255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:32.285267 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:49:32.285279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:32.285290 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:49:32.285301 | orchestrator | 2026-02-05 02:49:32.285313 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-05 02:49:32.285324 | orchestrator | Thursday 05 February 2026 02:49:30 +0000 (0:00:00.714) 0:00:33.507 ***** 2026-02-05 02:49:32.285335 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:32.285371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:39.185890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:39.186097 | orchestrator | 2026-02-05 02:49:39.186129 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-05 02:49:39.186183 | orchestrator | Thursday 05 February 2026 02:49:32 +0000 (0:00:01.792) 0:00:35.300 ***** 2026-02-05 02:49:39.186247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-05 02:49:39.186272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:39.186348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:49:39.186369 | orchestrator | 2026-02-05 02:49:39.186388 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-05 02:49:39.186409 | orchestrator | Thursday 05 February 2026 02:49:34 +0000 (0:00:02.198) 0:00:37.498 ***** 2026-02-05 02:49:39.186454 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 02:49:39.186476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 02:49:39.186496 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 02:49:39.186515 | orchestrator | 2026-02-05 02:49:39.186535 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-05 02:49:39.186555 | orchestrator | Thursday 05 February 2026 02:49:35 +0000 (0:00:01.432) 0:00:38.931 ***** 2026-02-05 02:49:39.186574 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:49:39.186594 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:49:39.186612 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:49:39.186631 | orchestrator | 2026-02-05 02:49:39.186651 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-05 02:49:39.186671 | orchestrator | Thursday 05 February 2026 02:49:37 +0000 (0:00:01.333) 0:00:40.265 ***** 2026-02-05 02:49:39.186692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:39.186713 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:49:39.186734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:39.186767 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:49:39.186787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 02:49:39.186806 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:49:39.186823 | orchestrator | 2026-02-05 02:49:39.186842 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-05 02:49:39.186866 | orchestrator | Thursday 05 February 2026 02:49:38 +0000 (0:00:00.903) 0:00:41.169 ***** 2026-02-05 02:49:39.186901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:50:03.571780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:50:03.571922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 02:50:03.571940 | orchestrator | 2026-02-05 02:50:03.571955 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-05 02:50:03.571968 | orchestrator | Thursday 05 February 2026 02:49:39 +0000 (0:00:01.036) 0:00:42.205 ***** 2026-02-05 02:50:03.571980 | orchestrator | changed: [testbed-node-0] 2026-02-05 
02:50:03.571992 | orchestrator | 2026-02-05 02:50:03.572003 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-05 02:50:03.572015 | orchestrator | Thursday 05 February 2026 02:49:41 +0000 (0:00:02.283) 0:00:44.489 ***** 2026-02-05 02:50:03.572026 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:50:03.572037 | orchestrator | 2026-02-05 02:50:03.572048 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-05 02:50:03.572059 | orchestrator | Thursday 05 February 2026 02:49:43 +0000 (0:00:02.355) 0:00:46.845 ***** 2026-02-05 02:50:03.572070 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:50:03.572081 | orchestrator | 2026-02-05 02:50:03.572092 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 02:50:03.572103 | orchestrator | Thursday 05 February 2026 02:49:57 +0000 (0:00:14.115) 0:01:00.960 ***** 2026-02-05 02:50:03.572114 | orchestrator | 2026-02-05 02:50:03.572178 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 02:50:03.572190 | orchestrator | Thursday 05 February 2026 02:49:58 +0000 (0:00:00.068) 0:01:01.029 ***** 2026-02-05 02:50:03.572201 | orchestrator | 2026-02-05 02:50:03.572212 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 02:50:03.572223 | orchestrator | Thursday 05 February 2026 02:49:58 +0000 (0:00:00.072) 0:01:01.101 ***** 2026-02-05 02:50:03.572234 | orchestrator | 2026-02-05 02:50:03.572245 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-05 02:50:03.572256 | orchestrator | Thursday 05 February 2026 02:49:58 +0000 (0:00:00.065) 0:01:01.167 ***** 2026-02-05 02:50:03.572267 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:50:03.572279 | orchestrator | changed: [testbed-node-1] 2026-02-05 
02:50:03.572304 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:50:03.572318 | orchestrator | 2026-02-05 02:50:03.572332 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:50:03.572347 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 02:50:03.572361 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 02:50:03.572375 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 02:50:03.572389 | orchestrator | 2026-02-05 02:50:03.572401 | orchestrator | 2026-02-05 02:50:03.572415 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 02:50:03.572428 | orchestrator | Thursday 05 February 2026 02:50:03 +0000 (0:00:05.074) 0:01:06.241 ***** 2026-02-05 02:50:03.572449 | orchestrator | =============================================================================== 2026-02-05 02:50:03.572463 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.12s 2026-02-05 02:50:03.572493 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.61s 2026-02-05 02:50:03.572507 | orchestrator | placement : Restart placement-api container ----------------------------- 5.07s 2026-02-05 02:50:03.572520 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.25s 2026-02-05 02:50:03.572533 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.23s 2026-02-05 02:50:03.572546 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.11s 2026-02-05 02:50:03.572560 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.94s 2026-02-05 02:50:03.572571 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.41s 2026-02-05 02:50:03.572582 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s 2026-02-05 02:50:03.572593 | orchestrator | placement : Creating placement databases -------------------------------- 2.28s 2026-02-05 02:50:03.572604 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.20s 2026-02-05 02:50:03.572615 | orchestrator | placement : Copying over config.json files for services ----------------- 1.79s 2026-02-05 02:50:03.572626 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.76s 2026-02-05 02:50:03.572637 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s 2026-02-05 02:50:03.572648 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.33s 2026-02-05 02:50:03.572659 | orchestrator | placement : Check placement containers ---------------------------------- 1.04s 2026-02-05 02:50:03.572670 | orchestrator | placement : Copying over existing policy file --------------------------- 0.90s 2026-02-05 02:50:03.572680 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.82s 2026-02-05 02:50:03.572692 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2026-02-05 02:50:03.572703 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.61s 2026-02-05 02:50:05.903305 | orchestrator | 2026-02-05 02:50:05 | INFO  | Task fb09bdce-4bf6-4c55-bbbc-0d50092b6ee8 (neutron) was prepared for execution. 2026-02-05 02:50:05.903386 | orchestrator | 2026-02-05 02:50:05 | INFO  | It takes a moment until task fb09bdce-4bf6-4c55-bbbc-0d50092b6ee8 (neutron) has been started and output is visible here. 
2026-02-05 02:50:55.097900 | orchestrator | 2026-02-05 02:50:55.098010 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 02:50:55.098070 | orchestrator | 2026-02-05 02:50:55.098129 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 02:50:55.098139 | orchestrator | Thursday 05 February 2026 02:50:10 +0000 (0:00:00.259) 0:00:00.259 ***** 2026-02-05 02:50:55.098148 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:50:55.098158 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:50:55.098168 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:50:55.098191 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:50:55.098200 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:50:55.098209 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:50:55.098218 | orchestrator | 2026-02-05 02:50:55.098227 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 02:50:55.098236 | orchestrator | Thursday 05 February 2026 02:50:10 +0000 (0:00:00.714) 0:00:00.974 ***** 2026-02-05 02:50:55.098244 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-05 02:50:55.098253 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-05 02:50:55.098261 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-05 02:50:55.098270 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-05 02:50:55.098280 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-05 02:50:55.098317 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-05 02:50:55.098327 | orchestrator | 2026-02-05 02:50:55.098335 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-05 02:50:55.098344 | orchestrator | 2026-02-05 02:50:55.098352 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-05 02:50:55.098361 | orchestrator | Thursday 05 February 2026 02:50:11 +0000 (0:00:00.595) 0:00:01.569 ***** 2026-02-05 02:50:55.098384 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:50:55.098394 | orchestrator | 2026-02-05 02:50:55.098403 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-05 02:50:55.098426 | orchestrator | Thursday 05 February 2026 02:50:12 +0000 (0:00:01.074) 0:00:02.644 ***** 2026-02-05 02:50:55.098435 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:50:55.098444 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:50:55.098453 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:50:55.098462 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:50:55.098473 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:50:55.098483 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:50:55.098492 | orchestrator | 2026-02-05 02:50:55.098501 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-05 02:50:55.098510 | orchestrator | Thursday 05 February 2026 02:50:13 +0000 (0:00:01.135) 0:00:03.779 ***** 2026-02-05 02:50:55.098519 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:50:55.098527 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:50:55.098536 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:50:55.098545 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:50:55.098553 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:50:55.098562 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:50:55.098571 | orchestrator | 2026-02-05 02:50:55.098581 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-05 02:50:55.098591 | orchestrator | Thursday 05 February 2026 02:50:14 +0000 (0:00:00.968) 0:00:04.748 ***** 
2026-02-05 02:50:55.098601 | orchestrator | ok: [testbed-node-0] => { 2026-02-05 02:50:55.098612 | orchestrator |  "changed": false, 2026-02-05 02:50:55.098621 | orchestrator |  "msg": "All assertions passed" 2026-02-05 02:50:55.098643 | orchestrator | } 2026-02-05 02:50:55.098653 | orchestrator | ok: [testbed-node-1] => { 2026-02-05 02:50:55.098662 | orchestrator |  "changed": false, 2026-02-05 02:50:55.098671 | orchestrator |  "msg": "All assertions passed" 2026-02-05 02:50:55.098681 | orchestrator | } 2026-02-05 02:50:55.098689 | orchestrator | ok: [testbed-node-2] => { 2026-02-05 02:50:55.098697 | orchestrator |  "changed": false, 2026-02-05 02:50:55.098706 | orchestrator |  "msg": "All assertions passed" 2026-02-05 02:50:55.098715 | orchestrator | } 2026-02-05 02:50:55.098724 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 02:50:55.098734 | orchestrator |  "changed": false, 2026-02-05 02:50:55.098744 | orchestrator |  "msg": "All assertions passed" 2026-02-05 02:50:55.098753 | orchestrator | } 2026-02-05 02:50:55.098763 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 02:50:55.098772 | orchestrator |  "changed": false, 2026-02-05 02:50:55.098781 | orchestrator |  "msg": "All assertions passed" 2026-02-05 02:50:55.098790 | orchestrator | } 2026-02-05 02:50:55.098799 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 02:50:55.098808 | orchestrator |  "changed": false, 2026-02-05 02:50:55.098817 | orchestrator |  "msg": "All assertions passed" 2026-02-05 02:50:55.098826 | orchestrator | } 2026-02-05 02:50:55.098835 | orchestrator | 2026-02-05 02:50:55.098844 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-05 02:50:55.098854 | orchestrator | Thursday 05 February 2026 02:50:15 +0000 (0:00:00.693) 0:00:05.442 ***** 2026-02-05 02:50:55.098863 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:50:55.098871 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:50:55.098880 | orchestrator 
| skipping: [testbed-node-2] 2026-02-05 02:50:55.098901 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:50:55.098909 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:50:55.098917 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:50:55.098926 | orchestrator | 2026-02-05 02:50:55.098934 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-05 02:50:55.098943 | orchestrator | Thursday 05 February 2026 02:50:15 +0000 (0:00:00.550) 0:00:05.992 ***** 2026-02-05 02:50:55.098952 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-05 02:50:55.098961 | orchestrator | 2026-02-05 02:50:55.098970 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-05 02:50:55.098979 | orchestrator | Thursday 05 February 2026 02:50:19 +0000 (0:00:03.908) 0:00:09.901 ***** 2026-02-05 02:50:55.098988 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-05 02:50:55.098999 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-05 02:50:55.099008 | orchestrator | 2026-02-05 02:50:55.099037 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-05 02:50:55.099046 | orchestrator | Thursday 05 February 2026 02:50:26 +0000 (0:00:06.873) 0:00:16.774 ***** 2026-02-05 02:50:55.099055 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 02:50:55.099064 | orchestrator | 2026-02-05 02:50:55.099073 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-05 02:50:55.099101 | orchestrator | Thursday 05 February 2026 02:50:30 +0000 (0:00:03.380) 0:00:20.155 ***** 2026-02-05 02:50:55.099110 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 02:50:55.099120 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-05 02:50:55.099129 | orchestrator | 2026-02-05 02:50:55.099137 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-05 02:50:55.099145 | orchestrator | Thursday 05 February 2026 02:50:34 +0000 (0:00:04.358) 0:00:24.513 ***** 2026-02-05 02:50:55.099153 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 02:50:55.099161 | orchestrator | 2026-02-05 02:50:55.099169 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-05 02:50:55.099178 | orchestrator | Thursday 05 February 2026 02:50:37 +0000 (0:00:03.291) 0:00:27.805 ***** 2026-02-05 02:50:55.099187 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-05 02:50:55.099196 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-05 02:50:55.099205 | orchestrator | 2026-02-05 02:50:55.099214 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 02:50:55.099224 | orchestrator | Thursday 05 February 2026 02:50:45 +0000 (0:00:08.073) 0:00:35.879 ***** 2026-02-05 02:50:55.099232 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:50:55.099242 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:50:55.099251 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:50:55.099260 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:50:55.099269 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:50:55.099287 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:50:55.099297 | orchestrator | 2026-02-05 02:50:55.099306 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-05 02:50:55.099316 | orchestrator | Thursday 05 February 2026 02:50:46 +0000 (0:00:00.742) 0:00:36.622 ***** 2026-02-05 02:50:55.099325 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
02:50:55.099334 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:50:55.099343 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:50:55.099352 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:50:55.099360 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:50:55.099369 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:50:55.099377 | orchestrator | 2026-02-05 02:50:55.099386 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-05 02:50:55.099395 | orchestrator | Thursday 05 February 2026 02:50:48 +0000 (0:00:02.131) 0:00:38.754 ***** 2026-02-05 02:50:55.099415 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:50:55.099425 | orchestrator | ok: [testbed-node-2] 2026-02-05 02:50:55.099434 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:50:55.099443 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:50:55.099451 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:50:55.099461 | orchestrator | ok: [testbed-node-1] 2026-02-05 02:50:55.099470 | orchestrator | 2026-02-05 02:50:55.099479 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-05 02:50:55.099488 | orchestrator | Thursday 05 February 2026 02:50:50 +0000 (0:00:01.571) 0:00:40.325 ***** 2026-02-05 02:50:55.099497 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:50:55.099507 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:50:55.099516 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:50:55.099525 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:50:55.099534 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:50:55.099543 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:50:55.099552 | orchestrator | 2026-02-05 02:50:55.099561 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-05 02:50:55.099571 | orchestrator | Thursday 05 February 2026 02:50:52 +0000 (0:00:02.049) 
0:00:42.375 ***** 2026-02-05 02:50:55.099584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:50:55.099610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:00.568188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:00.568347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:00.568393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:00.568409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:00.568421 | orchestrator | 2026-02-05 02:51:00.568435 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-05 02:51:00.568448 | orchestrator | Thursday 05 February 2026 02:50:55 +0000 (0:00:02.788) 0:00:45.163 ***** 2026-02-05 02:51:00.568459 | orchestrator | [WARNING]: Skipped 2026-02-05 02:51:00.568472 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-05 02:51:00.568484 | orchestrator | due to this access issue: 2026-02-05 02:51:00.568496 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-05 02:51:00.568506 | orchestrator | a directory 2026-02-05 02:51:00.568518 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:51:00.568529 | orchestrator | 2026-02-05 02:51:00.568540 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 02:51:00.568551 | orchestrator | Thursday 05 February 2026 02:50:55 +0000 (0:00:00.856) 0:00:46.020 ***** 2026-02-05 02:51:00.568581 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:51:00.568594 | orchestrator | 2026-02-05 02:51:00.568605 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-05 02:51:00.568617 | orchestrator | Thursday 05 February 2026 02:50:57 +0000 (0:00:01.231) 0:00:47.252 ***** 2026-02-05 02:51:00.568637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:00.568660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:00.568674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:00.568688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:00.568711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:05.306336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:05.306440 | orchestrator | 2026-02-05 02:51:05.306457 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-05 02:51:05.306469 | orchestrator | Thursday 05 February 2026 02:51:00 +0000 (0:00:03.386) 0:00:50.638 ***** 2026-02-05 02:51:05.306481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:05.306494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:05.306505 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:05.306516 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:05.306526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:05.306536 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:05.306591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:05.306603 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:05.306619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:05.306629 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:05.306639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:05.306649 | orchestrator | skipping: [testbed-node-5] 
2026-02-05 02:51:05.306659 | orchestrator | 2026-02-05 02:51:05.306669 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-05 02:51:05.306679 | orchestrator | Thursday 05 February 2026 02:51:02 +0000 (0:00:02.049) 0:00:52.687 ***** 2026-02-05 02:51:05.306689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:05.306699 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:05.306715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:10.298800 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:10.298926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:10.298945 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:10.298959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:10.298970 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:10.298981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:10.298992 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:10.299003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:10.299035 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:10.299046 | orchestrator | 2026-02-05 
02:51:10.299057 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-05 02:51:10.299154 | orchestrator | Thursday 05 February 2026 02:51:05 +0000 (0:00:02.689) 0:00:55.377 ***** 2026-02-05 02:51:10.299166 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:10.299177 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:10.299186 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:10.299196 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:10.299206 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:10.299216 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:10.299225 | orchestrator | 2026-02-05 02:51:10.299235 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-05 02:51:10.299245 | orchestrator | Thursday 05 February 2026 02:51:07 +0000 (0:00:02.384) 0:00:57.761 ***** 2026-02-05 02:51:10.299255 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:10.299265 | orchestrator | 2026-02-05 02:51:10.299275 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-05 02:51:10.299302 | orchestrator | Thursday 05 February 2026 02:51:07 +0000 (0:00:00.136) 0:00:57.898 ***** 2026-02-05 02:51:10.299312 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:10.299322 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:10.299333 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:10.299346 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:10.299357 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:10.299368 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:10.299379 | orchestrator | 2026-02-05 02:51:10.299391 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-05 02:51:10.299402 | orchestrator | Thursday 05 February 2026 02:51:08 +0000 (0:00:00.521) 
0:00:58.419 ***** 2026-02-05 02:51:10.299420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:10.299433 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:10.299445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 
02:51:10.299466 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:10.299478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:10.299490 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:10.299503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:10.299515 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:10.299539 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:18.174863 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:18.174969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:18.174987 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:18.174997 | orchestrator | 2026-02-05 02:51:18.175007 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-05 02:51:18.175017 | orchestrator | Thursday 05 February 2026 02:51:10 +0000 (0:00:01.949) 0:01:00.369 ***** 2026-02-05 02:51:18.175026 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:18.175128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:18.175142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:18.175191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:18.175202 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:18.175219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:18.175228 | orchestrator | 2026-02-05 02:51:18.175236 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-05 02:51:18.175245 | orchestrator | Thursday 05 February 2026 02:51:13 +0000 (0:00:02.782) 0:01:03.152 ***** 2026-02-05 02:51:18.175254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:18.175263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:18.175286 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:22.863376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:22.863497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-02-05 02:51:22.863511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:51:22.863520 | orchestrator | 2026-02-05 02:51:22.863530 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-05 02:51:22.863539 | orchestrator | Thursday 05 February 2026 02:51:18 +0000 (0:00:05.092) 0:01:08.244 ***** 2026-02-05 02:51:22.863548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-02-05 02:51:22.863568 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:22.863594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:22.863610 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:22.863619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:22.863627 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:22.863635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:22.863644 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:22.863652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:22.863661 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:22.863673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:22.863681 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:22.863689 | orchestrator | 2026-02-05 02:51:22.863698 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-05 02:51:22.863712 | orchestrator | Thursday 05 February 2026 02:51:20 +0000 (0:00:01.856) 0:01:10.101 ***** 2026-02-05 02:51:22.863721 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:22.863729 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:22.863737 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:22.863745 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:51:22.863753 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:51:22.863765 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:51:41.810860 | orchestrator | 2026-02-05 02:51:41.810946 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-05 02:51:41.810956 | orchestrator | Thursday 05 February 2026 02:51:22 +0000 (0:00:02.833) 0:01:12.934 ***** 2026-02-05 02:51:41.810965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:41.810975 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.810982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:41.810988 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:41.810994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:41.811000 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:41.811086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:41.811095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:51:41.811101 | orchestrator | 2026-02-05 02:51:41.811107 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-05 02:51:41.811113 | orchestrator | Thursday 05 February 2026 02:51:26 +0000 (0:00:03.503) 0:01:16.438 ***** 2026-02-05 02:51:41.811119 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811125 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811131 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:41.811137 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.811143 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:41.811149 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811154 | orchestrator | 2026-02-05 02:51:41.811160 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-05 02:51:41.811166 | orchestrator | Thursday 05 February 2026 02:51:28 +0000 (0:00:02.282) 0:01:18.721 ***** 2026-02-05 02:51:41.811172 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811178 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811183 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:41.811189 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:41.811195 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.811201 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811207 | orchestrator | 2026-02-05 02:51:41.811213 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-05 02:51:41.811218 | orchestrator | Thursday 05 February 2026 02:51:30 +0000 (0:00:02.076) 0:01:20.798 ***** 2026-02-05 02:51:41.811224 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811230 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811236 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:41.811242 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:41.811248 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.811253 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811259 | orchestrator | 2026-02-05 02:51:41.811265 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-05 02:51:41.811277 | orchestrator | Thursday 05 February 2026 02:51:33 +0000 (0:00:02.308) 0:01:23.107 ***** 2026-02-05 02:51:41.811283 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811289 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811294 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.811300 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:41.811306 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
02:51:41.811311 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811317 | orchestrator | 2026-02-05 02:51:41.811323 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-05 02:51:41.811329 | orchestrator | Thursday 05 February 2026 02:51:35 +0000 (0:00:02.030) 0:01:25.137 ***** 2026-02-05 02:51:41.811335 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811341 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811346 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:41.811352 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:41.811358 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.811364 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811370 | orchestrator | 2026-02-05 02:51:41.811375 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-05 02:51:41.811381 | orchestrator | Thursday 05 February 2026 02:51:37 +0000 (0:00:02.365) 0:01:27.503 ***** 2026-02-05 02:51:41.811387 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:41.811393 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811399 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811404 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:41.811413 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:41.811420 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:41.811426 | orchestrator | 2026-02-05 02:51:41.811433 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-05 02:51:41.811440 | orchestrator | Thursday 05 February 2026 02:51:39 +0000 (0:00:02.217) 0:01:29.721 ***** 2026-02-05 02:51:41.811447 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 02:51:41.811454 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:41.811462 
| orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 02:51:41.811469 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:41.811475 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 02:51:41.811486 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:45.939966 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 02:51:45.940150 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:45.940170 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 02:51:45.940182 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:45.940194 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 02:51:45.940205 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:45.940216 | orchestrator | 2026-02-05 02:51:45.940228 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-05 02:51:45.940239 | orchestrator | Thursday 05 February 2026 02:51:41 +0000 (0:00:02.154) 0:01:31.875 ***** 2026-02-05 02:51:45.940253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:45.940295 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:45.940308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:45.940320 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:51:45.940332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:45.940343 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:51:45.940387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:45.940402 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:51:45.940414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-02-05 02:51:45.940433 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:51:45.940445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:51:45.940456 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:51:45.940467 | orchestrator | 2026-02-05 02:51:45.940478 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-05 02:51:45.940489 | orchestrator | Thursday 05 February 2026 02:51:43 +0000 (0:00:01.950) 0:01:33.825 ***** 2026-02-05 02:51:45.940500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:45.940515 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:51:45.940536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:51:45.940569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:52:10.661264 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.661399 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.661419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:52:10.661435 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.661446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:52:10.661458 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 02:52:10.661470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:52:10.661481 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.661492 | orchestrator | 2026-02-05 02:52:10.661504 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-05 02:52:10.661515 | orchestrator | Thursday 05 February 2026 02:51:45 +0000 (0:00:02.183) 0:01:36.008 ***** 2026-02-05 02:52:10.661526 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.661537 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.661547 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.661558 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.661570 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.661580 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.661591 | orchestrator | 2026-02-05 02:52:10.661616 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-05 02:52:10.661628 | orchestrator | Thursday 05 February 2026 02:51:48 +0000 (0:00:02.193) 0:01:38.202 ***** 2026-02-05 02:52:10.661639 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.661650 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 02:52:10.661660 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.661671 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:52:10.661681 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:52:10.661692 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:52:10.661702 | orchestrator | 2026-02-05 02:52:10.661713 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-05 02:52:10.661733 | orchestrator | Thursday 05 February 2026 02:51:51 +0000 (0:00:03.558) 0:01:41.760 ***** 2026-02-05 02:52:10.661744 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.661755 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.661765 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.661776 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.661786 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.661797 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.661809 | orchestrator | 2026-02-05 02:52:10.661823 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-05 02:52:10.661837 | orchestrator | Thursday 05 February 2026 02:51:53 +0000 (0:00:02.091) 0:01:43.852 ***** 2026-02-05 02:52:10.661850 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.661862 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.661874 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.661887 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.661899 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.661912 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.661924 | orchestrator | 2026-02-05 02:52:10.661953 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-05 02:52:10.661966 | orchestrator | Thursday 05 February 2026 02:51:55 +0000 (0:00:02.227) 
0:01:46.079 ***** 2026-02-05 02:52:10.662176 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662192 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.662203 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.662214 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.662225 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.662235 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.662246 | orchestrator | 2026-02-05 02:52:10.662257 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-05 02:52:10.662268 | orchestrator | Thursday 05 February 2026 02:51:57 +0000 (0:00:01.991) 0:01:48.070 ***** 2026-02-05 02:52:10.662279 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662289 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.662300 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.662311 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.662322 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.662332 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.662343 | orchestrator | 2026-02-05 02:52:10.662354 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-05 02:52:10.662364 | orchestrator | Thursday 05 February 2026 02:52:00 +0000 (0:00:02.060) 0:01:50.131 ***** 2026-02-05 02:52:10.662375 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.662386 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662397 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.662407 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.662418 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.662429 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.662439 | orchestrator | 2026-02-05 02:52:10.662450 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2026-02-05 02:52:10.662461 | orchestrator | Thursday 05 February 2026 02:52:02 +0000 (0:00:02.065) 0:01:52.196 ***** 2026-02-05 02:52:10.662472 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662482 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.662493 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.662504 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.662514 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.662525 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.662536 | orchestrator | 2026-02-05 02:52:10.662547 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-05 02:52:10.662557 | orchestrator | Thursday 05 February 2026 02:52:04 +0000 (0:00:02.071) 0:01:54.267 ***** 2026-02-05 02:52:10.662568 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662591 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:10.662601 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.662612 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.662622 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.662633 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.662643 | orchestrator | 2026-02-05 02:52:10.662654 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-05 02:52:10.662665 | orchestrator | Thursday 05 February 2026 02:52:06 +0000 (0:00:02.105) 0:01:56.373 ***** 2026-02-05 02:52:10.662676 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 02:52:10.662687 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662698 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 02:52:10.662709 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 02:52:10.662719 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 02:52:10.662730 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:10.662741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 02:52:10.662751 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:10.662762 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 02:52:10.662773 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:10.662784 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 02:52:10.662801 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:10.662813 | orchestrator | 2026-02-05 02:52:10.662824 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-05 02:52:10.662835 | orchestrator | Thursday 05 February 2026 02:52:08 +0000 (0:00:02.159) 0:01:58.532 ***** 2026-02-05 02:52:10.662847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:52:10.662860 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:52:10.662883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:52:13.438012 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:52:13.438304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 02:52:13.438325 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:52:13.438340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:52:13.438353 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:52:13.438380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:52:13.438393 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:52:13.438404 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 02:52:13.438416 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:52:13.438428 | orchestrator | 2026-02-05 02:52:13.438440 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-05 02:52:13.438453 | orchestrator | Thursday 05 February 2026 02:52:10 +0000 (0:00:02.194) 0:02:00.727 ***** 2026-02-05 02:52:13.438484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:52:13.438506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:52:13.438524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 02:52:13.438537 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:52:13.438551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:52:13.438579 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 02:54:29.139848 | orchestrator | 2026-02-05 02:54:29.140020 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 02:54:29.140049 | orchestrator | Thursday 05 February 2026 02:52:13 +0000 (0:00:02.780) 0:02:03.507 ***** 2026-02-05 02:54:29.140071 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:54:29.140085 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:54:29.140096 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:54:29.140107 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:54:29.140118 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:54:29.140130 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:54:29.140141 | orchestrator | 2026-02-05 02:54:29.140159 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-05 02:54:29.140179 | orchestrator | Thursday 05 February 2026 02:52:13 +0000 (0:00:00.561) 0:02:04.069 ***** 2026-02-05 02:54:29.140196 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:54:29.140216 | orchestrator | 2026-02-05 02:54:29.140236 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-05 02:54:29.140256 | orchestrator | Thursday 05 February 2026 02:52:16 +0000 (0:00:02.141) 0:02:06.211 ***** 2026-02-05 02:54:29.140273 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:54:29.140285 | orchestrator | 2026-02-05 02:54:29.140296 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-05 02:54:29.140307 | orchestrator | Thursday 05 
February 2026 02:52:18 +0000 (0:00:02.554) 0:02:08.765 ***** 2026-02-05 02:54:29.140318 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:54:29.140329 | orchestrator | 2026-02-05 02:54:29.140340 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 02:54:29.140351 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:42.341) 0:02:51.107 ***** 2026-02-05 02:54:29.140363 | orchestrator | 2026-02-05 02:54:29.140374 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 02:54:29.140386 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:00.072) 0:02:51.180 ***** 2026-02-05 02:54:29.140399 | orchestrator | 2026-02-05 02:54:29.140413 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 02:54:29.140425 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:00.070) 0:02:51.250 ***** 2026-02-05 02:54:29.140438 | orchestrator | 2026-02-05 02:54:29.140451 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 02:54:29.140464 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:00.069) 0:02:51.319 ***** 2026-02-05 02:54:29.140477 | orchestrator | 2026-02-05 02:54:29.140507 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 02:54:29.140520 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:00.086) 0:02:51.406 ***** 2026-02-05 02:54:29.140533 | orchestrator | 2026-02-05 02:54:29.140546 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 02:54:29.140558 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:00.068) 0:02:51.475 ***** 2026-02-05 02:54:29.140571 | orchestrator | 2026-02-05 02:54:29.140583 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-02-05 02:54:29.140596 | orchestrator | Thursday 05 February 2026 02:53:01 +0000 (0:00:00.073) 0:02:51.549 ***** 2026-02-05 02:54:29.140633 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:54:29.140646 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:54:29.140660 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:54:29.140672 | orchestrator | 2026-02-05 02:54:29.140685 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-05 02:54:29.140697 | orchestrator | Thursday 05 February 2026 02:53:23 +0000 (0:00:22.460) 0:03:14.009 ***** 2026-02-05 02:54:29.140710 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:54:29.140723 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:54:29.140736 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:54:29.140748 | orchestrator | 2026-02-05 02:54:29.140758 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 02:54:29.140771 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:54:29.140784 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-05 02:54:29.140795 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-05 02:54:29.140807 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:54:29.140818 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:54:29.140829 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 02:54:29.140840 | orchestrator | 2026-02-05 02:54:29.140851 | orchestrator | 2026-02-05 02:54:29.140862 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-05 02:54:29.140873 | orchestrator | Thursday 05 February 2026 02:54:28 +0000 (0:01:04.699) 0:04:18.708 ***** 2026-02-05 02:54:29.140884 | orchestrator | =============================================================================== 2026-02-05 02:54:29.140895 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 64.70s 2026-02-05 02:54:29.140906 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.34s 2026-02-05 02:54:29.140917 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.46s 2026-02-05 02:54:29.140947 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.07s 2026-02-05 02:54:29.140985 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.87s 2026-02-05 02:54:29.141011 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.09s 2026-02-05 02:54:29.141023 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.36s 2026-02-05 02:54:29.141052 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.91s 2026-02-05 02:54:29.141073 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.56s 2026-02-05 02:54:29.141085 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.50s 2026-02-05 02:54:29.141096 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.39s 2026-02-05 02:54:29.141107 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.38s 2026-02-05 02:54:29.141118 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.29s 2026-02-05 02:54:29.141129 | orchestrator | neutron : Copying over ssh key 
------------------------------------------ 2.83s 2026-02-05 02:54:29.141140 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.79s 2026-02-05 02:54:29.141151 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.78s 2026-02-05 02:54:29.141172 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.78s 2026-02-05 02:54:29.141184 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.69s 2026-02-05 02:54:29.141195 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.55s 2026-02-05 02:54:29.141206 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.38s 2026-02-05 02:54:31.464317 | orchestrator | 2026-02-05 02:54:31 | INFO  | Task 048b1929-8769-4158-a8bd-f77724748ab7 (nova) was prepared for execution. 2026-02-05 02:54:31.464411 | orchestrator | 2026-02-05 02:54:31 | INFO  | It takes a moment until task 048b1929-8769-4158-a8bd-f77724748ab7 (nova) has been started and output is visible here. 
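The PLAY RECAP block above reports per-host counters in a fixed `key=value` format. When post-processing console logs like this one, those lines can be parsed mechanically; a minimal sketch (the helper name is hypothetical, not part of Ansible or Zuul):

```python
import re


def parse_recap_line(line: str) -> dict:
    """Parse one Ansible PLAY RECAP host line into a dict of counters.

    Expected shape (as emitted by the default stdout callback and seen
    in the log above):
      'testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 ...'
    """
    host, _, rest = line.partition(":")
    # Each counter is a word followed by '=' and an integer.
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    return {"host": host.strip(), **counters}


line = ("testbed-node-0 : ok=26  changed=15  unreachable=0 "
        "failed=0 skipped=32  rescued=0 ignored=0")
recap = parse_recap_line(line)
print(recap["host"], recap["failed"])  # testbed-node-0 0
```

A run is healthy when every host's `failed` and `unreachable` counters are zero, which is the case for all six testbed nodes in the recap above.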
2026-02-05 02:56:36.548315 | orchestrator | 2026-02-05 02:56:36.548457 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 02:56:36.548484 | orchestrator | 2026-02-05 02:56:36.548501 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-05 02:56:36.548517 | orchestrator | Thursday 05 February 2026 02:54:35 +0000 (0:00:00.281) 0:00:00.281 ***** 2026-02-05 02:56:36.548533 | orchestrator | changed: [testbed-manager] 2026-02-05 02:56:36.548552 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.548569 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:56:36.548585 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:56:36.548601 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:56:36.548617 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:56:36.548627 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:56:36.548637 | orchestrator | 2026-02-05 02:56:36.548647 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 02:56:36.548657 | orchestrator | Thursday 05 February 2026 02:54:36 +0000 (0:00:00.847) 0:00:01.128 ***** 2026-02-05 02:56:36.548666 | orchestrator | changed: [testbed-manager] 2026-02-05 02:56:36.548676 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.548686 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:56:36.548696 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:56:36.548705 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:56:36.548715 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:56:36.548725 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:56:36.548735 | orchestrator | 2026-02-05 02:56:36.548744 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 02:56:36.548754 | orchestrator | Thursday 05 February 2026 02:54:37 +0000 (0:00:00.884) 
0:00:02.013 ***** 2026-02-05 02:56:36.548764 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-05 02:56:36.548774 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-05 02:56:36.548784 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-05 02:56:36.548794 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-05 02:56:36.548803 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-05 02:56:36.548813 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-05 02:56:36.548822 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-05 02:56:36.548832 | orchestrator | 2026-02-05 02:56:36.548842 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-05 02:56:36.548852 | orchestrator | 2026-02-05 02:56:36.548863 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-05 02:56:36.548875 | orchestrator | Thursday 05 February 2026 02:54:38 +0000 (0:00:00.729) 0:00:02.743 ***** 2026-02-05 02:56:36.548886 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:56:36.548897 | orchestrator | 2026-02-05 02:56:36.548908 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-05 02:56:36.548945 | orchestrator | Thursday 05 February 2026 02:54:39 +0000 (0:00:00.751) 0:00:03.494 ***** 2026-02-05 02:56:36.548960 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-05 02:56:36.548997 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-05 02:56:36.549008 | orchestrator | 2026-02-05 02:56:36.549019 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-05 02:56:36.549031 | orchestrator | Thursday 05 February 2026 02:54:43 +0000 (0:00:04.418) 
0:00:07.913 ***** 2026-02-05 02:56:36.549043 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 02:56:36.549054 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 02:56:36.549065 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549077 | orchestrator | 2026-02-05 02:56:36.549089 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-05 02:56:36.549100 | orchestrator | Thursday 05 February 2026 02:54:47 +0000 (0:00:04.427) 0:00:12.340 ***** 2026-02-05 02:56:36.549111 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549123 | orchestrator | 2026-02-05 02:56:36.549134 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-05 02:56:36.549145 | orchestrator | Thursday 05 February 2026 02:54:48 +0000 (0:00:00.655) 0:00:12.995 ***** 2026-02-05 02:56:36.549157 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549169 | orchestrator | 2026-02-05 02:56:36.549180 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-05 02:56:36.549191 | orchestrator | Thursday 05 February 2026 02:54:49 +0000 (0:00:01.335) 0:00:14.331 ***** 2026-02-05 02:56:36.549202 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549213 | orchestrator | 2026-02-05 02:56:36.549222 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 02:56:36.549235 | orchestrator | Thursday 05 February 2026 02:54:52 +0000 (0:00:02.635) 0:00:16.966 ***** 2026-02-05 02:56:36.549252 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:56:36.549264 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.549280 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.549297 | orchestrator | 2026-02-05 02:56:36.549308 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 
2026-02-05 02:56:36.549317 | orchestrator | Thursday 05 February 2026 02:54:52 +0000 (0:00:00.289) 0:00:17.256 ***** 2026-02-05 02:56:36.549327 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:56:36.549337 | orchestrator | 2026-02-05 02:56:36.549346 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-05 02:56:36.549356 | orchestrator | Thursday 05 February 2026 02:55:26 +0000 (0:00:34.098) 0:00:51.354 ***** 2026-02-05 02:56:36.549365 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549375 | orchestrator | 2026-02-05 02:56:36.549384 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-05 02:56:36.549393 | orchestrator | Thursday 05 February 2026 02:55:42 +0000 (0:00:15.807) 0:01:07.161 ***** 2026-02-05 02:56:36.549403 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:56:36.549412 | orchestrator | 2026-02-05 02:56:36.549421 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-05 02:56:36.549434 | orchestrator | Thursday 05 February 2026 02:55:55 +0000 (0:00:12.937) 0:01:20.099 ***** 2026-02-05 02:56:36.549472 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:56:36.549490 | orchestrator | 2026-02-05 02:56:36.549515 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-05 02:56:36.549531 | orchestrator | Thursday 05 February 2026 02:55:56 +0000 (0:00:00.699) 0:01:20.798 ***** 2026-02-05 02:56:36.549543 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:56:36.549552 | orchestrator | 2026-02-05 02:56:36.549562 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 02:56:36.549571 | orchestrator | Thursday 05 February 2026 02:55:56 +0000 (0:00:00.491) 0:01:21.290 ***** 2026-02-05 02:56:36.549582 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:56:36.549591 | orchestrator | 2026-02-05 02:56:36.549601 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-05 02:56:36.549619 | orchestrator | Thursday 05 February 2026 02:55:57 +0000 (0:00:00.694) 0:01:21.984 ***** 2026-02-05 02:56:36.549628 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:56:36.549638 | orchestrator | 2026-02-05 02:56:36.549647 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-05 02:56:36.549657 | orchestrator | Thursday 05 February 2026 02:56:16 +0000 (0:00:19.471) 0:01:41.456 ***** 2026-02-05 02:56:36.549666 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:56:36.549676 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.549685 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.549695 | orchestrator | 2026-02-05 02:56:36.549704 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-05 02:56:36.549714 | orchestrator | 2026-02-05 02:56:36.549724 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-05 02:56:36.549733 | orchestrator | Thursday 05 February 2026 02:56:17 +0000 (0:00:00.292) 0:01:41.749 ***** 2026-02-05 02:56:36.549742 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:56:36.549752 | orchestrator | 2026-02-05 02:56:36.549761 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-05 02:56:36.549771 | orchestrator | Thursday 05 February 2026 02:56:17 +0000 (0:00:00.658) 0:01:42.407 ***** 2026-02-05 02:56:36.549780 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.549790 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.549799 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549809 | 
orchestrator | 2026-02-05 02:56:36.549818 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-05 02:56:36.549828 | orchestrator | Thursday 05 February 2026 02:56:20 +0000 (0:00:02.123) 0:01:44.530 ***** 2026-02-05 02:56:36.549837 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.549847 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.549856 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.549866 | orchestrator | 2026-02-05 02:56:36.549875 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-05 02:56:36.549885 | orchestrator | Thursday 05 February 2026 02:56:22 +0000 (0:00:02.263) 0:01:46.794 ***** 2026-02-05 02:56:36.549894 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:56:36.549904 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.549934 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.549945 | orchestrator | 2026-02-05 02:56:36.549955 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-05 02:56:36.549965 | orchestrator | Thursday 05 February 2026 02:56:22 +0000 (0:00:00.336) 0:01:47.130 ***** 2026-02-05 02:56:36.549974 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 02:56:36.549984 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.549993 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 02:56:36.550003 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.550012 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-05 02:56:36.550085 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-05 02:56:36.550095 | orchestrator | 2026-02-05 02:56:36.550104 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-05 02:56:36.550114 | orchestrator | Thursday 05 February 2026 
02:56:31 +0000 (0:00:08.849) 0:01:55.980 ***** 2026-02-05 02:56:36.550154 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:56:36.550164 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.550173 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.550183 | orchestrator | 2026-02-05 02:56:36.550192 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-05 02:56:36.550202 | orchestrator | Thursday 05 February 2026 02:56:31 +0000 (0:00:00.344) 0:01:56.324 ***** 2026-02-05 02:56:36.550211 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-05 02:56:36.550221 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:56:36.550230 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 02:56:36.550248 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.550258 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 02:56:36.550268 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.550277 | orchestrator | 2026-02-05 02:56:36.550287 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-05 02:56:36.550296 | orchestrator | Thursday 05 February 2026 02:56:32 +0000 (0:00:00.942) 0:01:57.267 ***** 2026-02-05 02:56:36.550306 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.550315 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.550325 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:56:36.550334 | orchestrator | 2026-02-05 02:56:36.550344 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-05 02:56:36.550353 | orchestrator | Thursday 05 February 2026 02:56:33 +0000 (0:00:00.491) 0:01:57.758 ***** 2026-02-05 02:56:36.550363 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.550372 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.550382 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 02:56:36.550391 | orchestrator | 2026-02-05 02:56:36.550401 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-05 02:56:36.550410 | orchestrator | Thursday 05 February 2026 02:56:34 +0000 (0:00:01.030) 0:01:58.788 ***** 2026-02-05 02:56:36.550420 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:56:36.550432 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:56:36.550460 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:57:57.547429 | orchestrator | 2026-02-05 02:57:57.547550 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-05 02:57:57.547567 | orchestrator | Thursday 05 February 2026 02:56:36 +0000 (0:00:02.206) 0:02:00.995 ***** 2026-02-05 02:57:57.547580 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:57:57.547592 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:57:57.547603 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:57:57.547615 | orchestrator | 2026-02-05 02:57:57.547626 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-05 02:57:57.547638 | orchestrator | Thursday 05 February 2026 02:56:59 +0000 (0:00:22.572) 0:02:23.568 ***** 2026-02-05 02:57:57.547649 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:57:57.547660 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:57:57.547671 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:57:57.547681 | orchestrator | 2026-02-05 02:57:57.547693 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-05 02:57:57.547704 | orchestrator | Thursday 05 February 2026 02:57:11 +0000 (0:00:12.565) 0:02:36.134 ***** 2026-02-05 02:57:57.547715 | orchestrator | ok: [testbed-node-0] 2026-02-05 02:57:57.547726 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:57:57.547736 | orchestrator | skipping: [testbed-node-2] 
2026-02-05 02:57:57.547748 | orchestrator | 2026-02-05 02:57:57.547759 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-05 02:57:57.547769 | orchestrator | Thursday 05 February 2026 02:57:12 +0000 (0:00:00.864) 0:02:36.998 ***** 2026-02-05 02:57:57.547780 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:57:57.547792 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:57:57.547803 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:57:57.547814 | orchestrator | 2026-02-05 02:57:57.547825 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-05 02:57:57.547836 | orchestrator | Thursday 05 February 2026 02:57:25 +0000 (0:00:12.999) 0:02:49.998 ***** 2026-02-05 02:57:57.547847 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:57:57.547858 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:57:57.547869 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:57:57.547880 | orchestrator | 2026-02-05 02:57:57.547891 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-05 02:57:57.547902 | orchestrator | Thursday 05 February 2026 02:57:26 +0000 (0:00:01.201) 0:02:51.199 ***** 2026-02-05 02:57:57.547962 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:57:57.547977 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:57:57.547990 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:57:57.548003 | orchestrator | 2026-02-05 02:57:57.548015 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-05 02:57:57.548028 | orchestrator | 2026-02-05 02:57:57.548041 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 02:57:57.548054 | orchestrator | Thursday 05 February 2026 02:57:27 +0000 (0:00:00.305) 0:02:51.505 ***** 2026-02-05 02:57:57.548066 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:57:57.548080 | orchestrator | 2026-02-05 02:57:57.548093 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-05 02:57:57.548105 | orchestrator | Thursday 05 February 2026 02:57:27 +0000 (0:00:00.630) 0:02:52.136 ***** 2026-02-05 02:57:57.548118 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-05 02:57:57.548131 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-05 02:57:57.548144 | orchestrator | 2026-02-05 02:57:57.548156 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-05 02:57:57.548168 | orchestrator | Thursday 05 February 2026 02:57:31 +0000 (0:00:03.393) 0:02:55.529 ***** 2026-02-05 02:57:57.548180 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-05 02:57:57.548297 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-05 02:57:57.548319 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-05 02:57:57.548331 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-05 02:57:57.548343 | orchestrator | 2026-02-05 02:57:57.548354 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-05 02:57:57.548365 | orchestrator | Thursday 05 February 2026 02:57:37 +0000 (0:00:06.798) 0:03:02.328 ***** 2026-02-05 02:57:57.548376 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 02:57:57.548386 | orchestrator | 2026-02-05 02:57:57.548397 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-05 02:57:57.548408 | orchestrator | Thursday 05 February 2026 02:57:41 +0000 (0:00:03.329) 0:03:05.657 ***** 2026-02-05 02:57:57.548418 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 02:57:57.548429 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-05 02:57:57.548440 | orchestrator | 2026-02-05 02:57:57.548451 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-05 02:57:57.548462 | orchestrator | Thursday 05 February 2026 02:57:45 +0000 (0:00:04.100) 0:03:09.758 ***** 2026-02-05 02:57:57.548473 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 02:57:57.548483 | orchestrator | 2026-02-05 02:57:57.548494 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-05 02:57:57.548505 | orchestrator | Thursday 05 February 2026 02:57:48 +0000 (0:00:03.264) 0:03:13.022 ***** 2026-02-05 02:57:57.548515 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-05 02:57:57.548526 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-05 02:57:57.548537 | orchestrator | 2026-02-05 02:57:57.548548 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-05 02:57:57.548583 | orchestrator | Thursday 05 February 2026 02:57:55 +0000 (0:00:07.378) 0:03:20.401 ***** 2026-02-05 02:57:57.548601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:57:57.548630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:57:57.548644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:57:57.548670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-05 02:58:01.974783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:01.974890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:01.974907 | orchestrator | 2026-02-05 02:58:01.974994 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-05 02:58:01.975009 | orchestrator | Thursday 05 February 2026 02:57:57 +0000 (0:00:01.597) 0:03:21.998 ***** 2026-02-05 02:58:01.975020 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:01.975033 | orchestrator | 2026-02-05 02:58:01.975045 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-05 02:58:01.975056 | orchestrator | Thursday 05 February 2026 02:57:57 +0000 (0:00:00.133) 0:03:22.132 ***** 2026-02-05 02:58:01.975067 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:01.975078 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:01.975089 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:01.975100 | orchestrator | 2026-02-05 02:58:01.975112 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-05 02:58:01.975123 | orchestrator | Thursday 05 February 2026 02:57:57 +0000 (0:00:00.317) 0:03:22.449 ***** 2026-02-05 02:58:01.975134 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 02:58:01.975144 | orchestrator | 2026-02-05 02:58:01.975155 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-05 02:58:01.975166 | orchestrator | Thursday 05 February 2026 02:57:58 +0000 (0:00:00.678) 0:03:23.128 ***** 2026-02-05 02:58:01.975177 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:01.975188 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:01.975199 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:01.975210 | orchestrator | 2026-02-05 02:58:01.975221 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 02:58:01.975232 | orchestrator | Thursday 05 February 2026 02:57:58 +0000 (0:00:00.289) 0:03:23.418 ***** 2026-02-05 02:58:01.975243 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:58:01.975255 | orchestrator | 2026-02-05 02:58:01.975267 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-05 02:58:01.975278 | orchestrator | Thursday 05 February 2026 02:57:59 +0000 (0:00:00.769) 0:03:24.188 ***** 2026-02-05 02:58:01.975293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:01.975371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:01.975391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:01.975405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:01.975420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:01.975445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:01.975459 | orchestrator | 2026-02-05 02:58:01.975478 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-05 02:58:03.727213 | orchestrator | Thursday 05 February 2026 02:58:01 +0000 (0:00:02.238) 0:03:26.427 ***** 2026-02-05 02:58:03.727333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:03.727356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:03.727370 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:03.727384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:03.727422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:03.727449 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:03.727482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:03.727495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:03.727507 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:03.727519 | orchestrator | 2026-02-05 02:58:03.727531 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-05 02:58:03.727542 | orchestrator | Thursday 05 February 2026 02:58:02 +0000 (0:00:00.683) 
0:03:27.110 ***** 2026-02-05 02:58:03.727554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:03.727574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:03.727586 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 02:58:03.727612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:06.016891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:06.017063 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 02:58:06.017085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:06.017238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:06.017255 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 02:58:06.017268 | orchestrator | 2026-02-05 02:58:06.017280 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-05 02:58:06.017294 | orchestrator | Thursday 05 February 2026 02:58:03 +0000 (0:00:01.067) 0:03:28.177 ***** 2026-02-05 02:58:06.017323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:06.017368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:06.017411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:06.017449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:06.017474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:06.017498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:12.056481 | orchestrator | 2026-02-05 02:58:12.056624 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-05 02:58:12.056639 | orchestrator | Thursday 05 February 2026 02:58:06 +0000 (0:00:02.288) 0:03:30.465 ***** 2026-02-05 02:58:12.056656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:12.056703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:12.056733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:12.056767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:12.056781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:12.056799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:12.056810 | orchestrator | 2026-02-05 02:58:12.056820 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-05 02:58:12.056830 | orchestrator | Thursday 05 February 2026 02:58:11 +0000 (0:00:05.189) 0:03:35.655 ***** 2026-02-05 02:58:12.056846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:12.056857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:12.056868 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:12.056892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:16.237736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:16.237860 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:16.237879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 02:58:16.237913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 02:58:16.238006 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:16.238082 | orchestrator | 2026-02-05 02:58:16.238097 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-05 02:58:16.238118 | orchestrator | Thursday 05 February 2026 02:58:12 +0000 (0:00:00.857) 0:03:36.512 ***** 2026-02-05 02:58:16.238127 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:58:16.238136 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:58:16.238175 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:58:16.238185 | orchestrator | 2026-02-05 02:58:16.238194 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-05 02:58:16.238203 | orchestrator | Thursday 05 February 2026 02:58:13 +0000 (0:00:01.505) 0:03:38.018 ***** 2026-02-05 02:58:16.238212 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:16.238221 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:16.238230 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:16.238239 | orchestrator | 2026-02-05 02:58:16.238249 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-05 02:58:16.238260 | orchestrator | Thursday 05 February 2026 02:58:13 +0000 (0:00:00.301) 0:03:38.319 ***** 2026-02-05 02:58:16.238296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:16.238339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:16.238359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 02:58:16.238371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:16.238392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:16.238410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:57.389565 | orchestrator | 2026-02-05 02:58:57.389691 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 02:58:57.389711 | orchestrator | Thursday 05 February 2026 02:58:15 +0000 (0:00:01.706) 0:03:40.026 ***** 2026-02-05 02:58:57.389726 | orchestrator | 2026-02-05 02:58:57.389741 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 02:58:57.389755 | orchestrator | Thursday 05 February 2026 02:58:15 
+0000 (0:00:00.361) 0:03:40.387 ***** 2026-02-05 02:58:57.389768 | orchestrator | 2026-02-05 02:58:57.389782 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 02:58:57.389796 | orchestrator | Thursday 05 February 2026 02:58:16 +0000 (0:00:00.153) 0:03:40.541 ***** 2026-02-05 02:58:57.389809 | orchestrator | 2026-02-05 02:58:57.389823 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-05 02:58:57.389837 | orchestrator | Thursday 05 February 2026 02:58:16 +0000 (0:00:00.142) 0:03:40.684 ***** 2026-02-05 02:58:57.389851 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:58:57.389866 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:58:57.389880 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:58:57.389893 | orchestrator | 2026-02-05 02:58:57.389907 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-05 02:58:57.389942 | orchestrator | Thursday 05 February 2026 02:58:33 +0000 (0:00:17.219) 0:03:57.903 ***** 2026-02-05 02:58:57.389958 | orchestrator | changed: [testbed-node-0] 2026-02-05 02:58:57.389973 | orchestrator | changed: [testbed-node-1] 2026-02-05 02:58:57.389987 | orchestrator | changed: [testbed-node-2] 2026-02-05 02:58:57.390001 | orchestrator | 2026-02-05 02:58:57.390080 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-05 02:58:57.390096 | orchestrator | 2026-02-05 02:58:57.390110 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 02:58:57.390125 | orchestrator | Thursday 05 February 2026 02:58:43 +0000 (0:00:10.406) 0:04:08.310 ***** 2026-02-05 02:58:57.390141 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:58:57.390157 | 
orchestrator | 2026-02-05 02:58:57.390172 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 02:58:57.390203 | orchestrator | Thursday 05 February 2026 02:58:44 +0000 (0:00:01.042) 0:04:09.352 ***** 2026-02-05 02:58:57.390219 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:58:57.390234 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:58:57.390249 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:58:57.390291 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:57.390307 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:57.390322 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:57.390336 | orchestrator | 2026-02-05 02:58:57.390352 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-05 02:58:57.390367 | orchestrator | Thursday 05 February 2026 02:58:45 +0000 (0:00:00.548) 0:04:09.900 ***** 2026-02-05 02:58:57.390382 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:57.390397 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:57.390412 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:57.390425 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:58:57.390439 | orchestrator | 2026-02-05 02:58:57.390452 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 02:58:57.390465 | orchestrator | Thursday 05 February 2026 02:58:46 +0000 (0:00:00.995) 0:04:10.896 ***** 2026-02-05 02:58:57.390479 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-05 02:58:57.390492 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-05 02:58:57.390504 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-05 02:58:57.390517 | orchestrator | 2026-02-05 02:58:57.390530 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2026-02-05 02:58:57.390544 | orchestrator | Thursday 05 February 2026 02:58:47 +0000 (0:00:00.671) 0:04:11.568 ***** 2026-02-05 02:58:57.390557 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-05 02:58:57.390570 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-05 02:58:57.390583 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-05 02:58:57.390596 | orchestrator | 2026-02-05 02:58:57.390608 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 02:58:57.390621 | orchestrator | Thursday 05 February 2026 02:58:48 +0000 (0:00:01.203) 0:04:12.771 ***** 2026-02-05 02:58:57.390633 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-05 02:58:57.390647 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:58:57.390660 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-05 02:58:57.390673 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:58:57.390686 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-05 02:58:57.390700 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:58:57.390713 | orchestrator | 2026-02-05 02:58:57.390726 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-05 02:58:57.390740 | orchestrator | Thursday 05 February 2026 02:58:49 +0000 (0:00:00.758) 0:04:13.529 ***** 2026-02-05 02:58:57.390753 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 02:58:57.390767 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 02:58:57.390780 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:57.390793 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 02:58:57.390806 | orchestrator | skipping: [testbed-node-1] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 02:58:57.390819 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:57.390832 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 02:58:57.390846 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 02:58:57.390878 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:57.390892 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 02:58:57.390906 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 02:58:57.390953 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 02:58:57.390969 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 02:58:57.390993 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 02:58:57.391007 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 02:58:57.391020 | orchestrator | 2026-02-05 02:58:57.391033 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-05 02:58:57.391046 | orchestrator | Thursday 05 February 2026 02:58:51 +0000 (0:00:02.128) 0:04:15.657 ***** 2026-02-05 02:58:57.391060 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:57.391073 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:57.391086 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:57.391100 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:58:57.391113 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:58:57.391123 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:58:57.391134 | orchestrator | 2026-02-05 02:58:57.391145 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 
2026-02-05 02:58:57.391156 | orchestrator | Thursday 05 February 2026 02:58:52 +0000 (0:00:01.445) 0:04:17.103 ***** 2026-02-05 02:58:57.391168 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:58:57.391180 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:58:57.391191 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:58:57.391203 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:58:57.391214 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:58:57.391226 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:58:57.391237 | orchestrator | 2026-02-05 02:58:57.391249 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-05 02:58:57.391260 | orchestrator | Thursday 05 February 2026 02:58:55 +0000 (0:00:02.791) 0:04:19.894 ***** 2026-02-05 02:58:57.391282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:58:57.391298 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:58:57.391320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:58:58.756807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:58:58.756948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:58:58.756984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:58:58.756996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:58:58.757153 | orchestrator | 2026-02-05 02:58:58.757164 | orchestrator | TASK 
[nova-cell : include_tasks] *********************************************** 2026-02-05 02:58:58.757175 | orchestrator | Thursday 05 February 2026 02:58:57 +0000 (0:00:02.094) 0:04:21.988 ***** 2026-02-05 02:58:58.757185 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 02:58:58.757195 | orchestrator | 2026-02-05 02:58:58.757204 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-05 02:58:58.757219 | orchestrator | Thursday 05 February 2026 02:58:58 +0000 (0:00:01.215) 0:04:23.204 ***** 2026-02-05 02:59:02.134192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:02.134516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 
02:59:03.806880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:03.807039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:03.807053 | orchestrator | 2026-02-05 02:59:03.807062 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-05 02:59:03.807070 | orchestrator | Thursday 05 February 2026 02:59:02 +0000 (0:00:03.874) 0:04:27.078 ***** 2026-02-05 02:59:03.807079 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 02:59:03.807102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 02:59:03.807135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 02:59:03.807155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 02:59:03.807163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 02:59:03.807170 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:03.807179 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 02:59:03.807192 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:59:03.807198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 02:59:03.807205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 02:59:03.807218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 02:59:05.533665 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:59:05.533785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2026-02-05 02:59:05.533806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:59:05.533842 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:59:05.533854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 02:59:05.533866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 
'timeout': '30'}}})  2026-02-05 02:59:05.533878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:59:05.533890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:59:05.533901 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:59:05.533912 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:59:05.534009 | orchestrator | 2026-02-05 02:59:05.534130 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-05 02:59:05.534153 | orchestrator | Thursday 05 February 2026 02:59:03 +0000 (0:00:01.374) 0:04:28.452 ***** 2026-02-05 02:59:05.534203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 02:59:05.534241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 02:59:05.534265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 02:59:05.534287 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:05.534304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 02:59:05.534318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 02:59:05.534348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 02:59:12.791499 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:59:12.791616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 02:59:12.791662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 02:59:12.791678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 02:59:12.791691 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:59:12.791704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 02:59:12.791717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:59:12.791728 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:59:12.791773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 02:59:12.791794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:59:12.791805 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:59:12.791817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 02:59:12.791828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 02:59:12.791840 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:59:12.791852 | orchestrator | 2026-02-05 02:59:12.791864 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 02:59:12.791877 | orchestrator | Thursday 05 February 2026 02:59:06 +0000 (0:00:02.237) 0:04:30.690 ***** 2026-02-05 02:59:12.791888 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:59:12.791899 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:59:12.791910 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:59:12.791954 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 02:59:12.791966 | orchestrator | 2026-02-05 02:59:12.791978 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-05 
02:59:12.791989 | orchestrator | Thursday 05 February 2026 02:59:07 +0000 (0:00:01.081) 0:04:31.772 ***** 2026-02-05 02:59:12.792000 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 02:59:12.792011 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 02:59:12.792023 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 02:59:12.792036 | orchestrator | 2026-02-05 02:59:12.792049 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-05 02:59:12.792062 | orchestrator | Thursday 05 February 2026 02:59:08 +0000 (0:00:00.935) 0:04:32.708 ***** 2026-02-05 02:59:12.792075 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 02:59:12.792087 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 02:59:12.792100 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 02:59:12.792113 | orchestrator | 2026-02-05 02:59:12.792126 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-05 02:59:12.792139 | orchestrator | Thursday 05 February 2026 02:59:09 +0000 (0:00:00.899) 0:04:33.607 ***** 2026-02-05 02:59:12.792159 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:59:12.792178 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:59:12.792196 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:59:12.792215 | orchestrator | 2026-02-05 02:59:12.792234 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-05 02:59:12.792254 | orchestrator | Thursday 05 February 2026 02:59:09 +0000 (0:00:00.730) 0:04:34.337 ***** 2026-02-05 02:59:12.792273 | orchestrator | ok: [testbed-node-3] 2026-02-05 02:59:12.792287 | orchestrator | ok: [testbed-node-4] 2026-02-05 02:59:12.792300 | orchestrator | ok: [testbed-node-5] 2026-02-05 02:59:12.792312 | orchestrator | 2026-02-05 02:59:12.792325 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-02-05 02:59:12.792338 | orchestrator | Thursday 05 February 2026 02:59:10 +0000 (0:00:00.511) 0:04:34.849 ***** 2026-02-05 02:59:12.792350 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-05 02:59:12.792364 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-05 02:59:12.792376 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-05 02:59:12.792389 | orchestrator | 2026-02-05 02:59:12.792400 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-05 02:59:12.792410 | orchestrator | Thursday 05 February 2026 02:59:11 +0000 (0:00:01.201) 0:04:36.050 ***** 2026-02-05 02:59:12.792436 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-05 02:59:31.050314 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-05 02:59:31.050451 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-05 02:59:31.050471 | orchestrator | 2026-02-05 02:59:31.050485 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-05 02:59:31.050498 | orchestrator | Thursday 05 February 2026 02:59:12 +0000 (0:00:01.197) 0:04:37.247 ***** 2026-02-05 02:59:31.050509 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-05 02:59:31.050521 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-05 02:59:31.050532 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-05 02:59:31.050544 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-05 02:59:31.050555 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-05 02:59:31.050566 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-05 02:59:31.050577 | orchestrator | 2026-02-05 02:59:31.050588 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-05 
02:59:31.050599 | orchestrator | Thursday 05 February 2026 02:59:16 +0000 (0:00:04.063) 0:04:41.311 ***** 2026-02-05 02:59:31.050610 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:31.050623 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:59:31.050634 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:59:31.050645 | orchestrator | 2026-02-05 02:59:31.050657 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-05 02:59:31.050668 | orchestrator | Thursday 05 February 2026 02:59:17 +0000 (0:00:00.329) 0:04:41.641 ***** 2026-02-05 02:59:31.050679 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:31.050690 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:59:31.050701 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:59:31.050712 | orchestrator | 2026-02-05 02:59:31.050724 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-05 02:59:31.050735 | orchestrator | Thursday 05 February 2026 02:59:17 +0000 (0:00:00.325) 0:04:41.967 ***** 2026-02-05 02:59:31.050747 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:59:31.050758 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:59:31.050769 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:59:31.050780 | orchestrator | 2026-02-05 02:59:31.050791 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-05 02:59:31.050802 | orchestrator | Thursday 05 February 2026 02:59:19 +0000 (0:00:01.505) 0:04:43.472 ***** 2026-02-05 02:59:31.050814 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-05 02:59:31.050854 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-05 02:59:31.050868 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-05 02:59:31.050882 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-05 02:59:31.050896 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-05 02:59:31.050910 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-05 02:59:31.050996 | orchestrator | 2026-02-05 02:59:31.051011 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-05 02:59:31.051023 | orchestrator | Thursday 05 February 2026 02:59:22 +0000 (0:00:03.339) 0:04:46.811 ***** 2026-02-05 02:59:31.051036 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 02:59:31.051050 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 02:59:31.051063 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 02:59:31.051076 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 02:59:31.051088 | orchestrator | changed: [testbed-node-3] 2026-02-05 02:59:31.051102 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 02:59:31.051115 | orchestrator | changed: [testbed-node-5] 2026-02-05 02:59:31.051128 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 02:59:31.051140 | orchestrator | changed: [testbed-node-4] 2026-02-05 02:59:31.051153 | orchestrator | 2026-02-05 02:59:31.051167 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-05 02:59:31.051179 | orchestrator | Thursday 05 February 2026 02:59:25 +0000 (0:00:03.044) 0:04:49.856 ***** 2026-02-05 02:59:31.051193 | 
orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:31.051206 | orchestrator | 2026-02-05 02:59:31.051219 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-05 02:59:31.051231 | orchestrator | Thursday 05 February 2026 02:59:25 +0000 (0:00:00.130) 0:04:49.987 ***** 2026-02-05 02:59:31.051241 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:31.051253 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:59:31.051263 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:59:31.051274 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:59:31.051290 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:59:31.051308 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:59:31.051326 | orchestrator | 2026-02-05 02:59:31.051359 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-05 02:59:31.051380 | orchestrator | Thursday 05 February 2026 02:59:26 +0000 (0:00:00.807) 0:04:50.794 ***** 2026-02-05 02:59:31.051398 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 02:59:31.051414 | orchestrator | 2026-02-05 02:59:31.051425 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-05 02:59:31.051436 | orchestrator | Thursday 05 February 2026 02:59:27 +0000 (0:00:00.674) 0:04:51.469 ***** 2026-02-05 02:59:31.051464 | orchestrator | skipping: [testbed-node-3] 2026-02-05 02:59:31.051497 | orchestrator | skipping: [testbed-node-4] 2026-02-05 02:59:31.051509 | orchestrator | skipping: [testbed-node-5] 2026-02-05 02:59:31.051520 | orchestrator | skipping: [testbed-node-0] 2026-02-05 02:59:31.051531 | orchestrator | skipping: [testbed-node-1] 2026-02-05 02:59:31.051542 | orchestrator | skipping: [testbed-node-2] 2026-02-05 02:59:31.051553 | orchestrator | 2026-02-05 02:59:31.051564 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-02-05 02:59:31.051575 | orchestrator | Thursday 05 February 2026 02:59:27 +0000 (0:00:00.581) 0:04:52.051 ***** 2026-02-05 02:59:31.051600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:59:31.051615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:59:31.051627 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 02:59:31.051639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:59:31.051666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:59:35.816714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 02:59:35.816826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:59:35.816844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:59:35.816856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 02:59:35.816873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:35.816892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 02:59:35.817029 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 02:59:35.817088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 02:59:35.817110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 02:59:35.817130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 02:59:35.817151 | orchestrator |
2026-02-05 02:59:35.817173 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-05 02:59:35.817196 | orchestrator | Thursday 05 February 2026 02:59:31 +0000 (0:00:04.026) 0:04:56.077 *****
2026-02-05 02:59:35.817219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 02:59:35.817242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 02:59:35.817276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 02:59:37.865864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 02:59:37.866144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 02:59:37.866180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 02:59:37.866204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 02:59:37.866275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 02:59:37.866324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 02:59:37.866347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 02:59:37.866369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 02:59:37.866392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 02:59:37.866413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 02:59:37.866450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 02:59:37.866465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 02:59:37.866479 | orchestrator |
2026-02-05 02:59:37.866495 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-05 02:59:37.866526 | orchestrator | Thursday 05 February 2026 02:59:37 +0000 (0:00:06.241) 0:05:02.319 *****
2026-02-05 02:59:58.907185 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:59:58.907319 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:59:58.907342 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:59:58.907358 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.907375 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.907391 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.907407 | orchestrator |
2026-02-05 02:59:58.907426 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-05 02:59:58.907444 | orchestrator | Thursday 05 February 2026 02:59:39 +0000 (0:00:01.508) 0:05:03.828 *****
2026-02-05 02:59:58.907461 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 02:59:58.907480 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 02:59:58.907497 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 02:59:58.907513 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 02:59:58.907530 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 02:59:58.907547 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 02:59:58.907563 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 02:59:58.907581 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.907598 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 02:59:58.907615 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.907632 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 02:59:58.907648 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.907662 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 02:59:58.907678 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 02:59:58.907726 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 02:59:58.907744 | orchestrator |
2026-02-05 02:59:58.907763 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-05 02:59:58.907782 | orchestrator | Thursday 05 February 2026 02:59:42 +0000 (0:00:03.443) 0:05:07.271 *****
2026-02-05 02:59:58.907799 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:59:58.907816 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:59:58.907833 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:59:58.907850 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.907867 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.907901 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.907993 | orchestrator |
2026-02-05 02:59:58.908015 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-02-05 02:59:58.908032 | orchestrator | Thursday 05 February 2026 02:59:43 +0000 (0:00:00.829) 0:05:08.100 *****
2026-02-05 02:59:58.908048 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 02:59:58.908066 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 02:59:58.908083 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 02:59:58.908099 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 02:59:58.908116 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 02:59:58.908132 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 02:59:58.908166 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908184 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908201 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908218 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.908234 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908250 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908266 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.908283 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908298 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.908315 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908331 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908371 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908390 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908407 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908423 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-05 02:59:58.908439 | orchestrator |
2026-02-05 02:59:58.908455 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-02-05 02:59:58.908473 | orchestrator | Thursday 05 February 2026 02:59:48 +0000 (0:00:04.878) 0:05:12.979 *****
2026-02-05 02:59:58.908585 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 02:59:58.908603 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 02:59:58.908620 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 02:59:58.908636 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-05 02:59:58.908652 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-05 02:59:58.908668 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-05 02:59:58.908684 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 02:59:58.908699 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 02:59:58.908715 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 02:59:58.908731 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 02:59:58.908747 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 02:59:58.908763 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 02:59:58.908780 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-05 02:59:58.908796 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.908812 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-05 02:59:58.908828 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.908845 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-05 02:59:58.908862 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.908879 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-05 02:59:58.908895 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-05 02:59:58.908911 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-05 02:59:58.908955 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 02:59:58.908972 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 02:59:58.908988 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 02:59:58.909003 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-05 02:59:58.909019 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-05 02:59:58.909034 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-05 02:59:58.909051 | orchestrator |
2026-02-05 02:59:58.909088 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-02-05 02:59:58.909105 | orchestrator | Thursday 05 February 2026 02:59:55 +0000 (0:00:07.083) 0:05:20.062 *****
2026-02-05 02:59:58.909122 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:59:58.909138 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:59:58.909154 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:59:58.909168 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.909184 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.909201 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.909218 | orchestrator |
2026-02-05 02:59:58.909235 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-02-05 02:59:58.909251 | orchestrator | Thursday 05 February 2026 02:59:56 +0000 (0:00:00.610) 0:05:20.673 *****
2026-02-05 02:59:58.909266 | orchestrator | skipping: [testbed-node-3]
2026-02-05 02:59:58.909291 | orchestrator | skipping: [testbed-node-4]
2026-02-05 02:59:58.909305 | orchestrator | skipping: [testbed-node-5]
2026-02-05 02:59:58.909318 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.909331 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.909345 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.909412 | orchestrator |
2026-02-05 02:59:58.909428 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-02-05 02:59:58.909442 | orchestrator | Thursday 05 February 2026 02:59:57 +0000 (0:00:00.825) 0:05:21.498 *****
2026-02-05 02:59:58.909456 | orchestrator | skipping: [testbed-node-0]
2026-02-05 02:59:58.909471 | orchestrator | skipping: [testbed-node-1]
2026-02-05 02:59:58.909485 | orchestrator | skipping: [testbed-node-2]
2026-02-05 02:59:58.909499 | orchestrator | changed: [testbed-node-3]
2026-02-05 02:59:58.909514 | orchestrator | changed: [testbed-node-4]
2026-02-05 02:59:58.909528 | orchestrator | changed: [testbed-node-5]
2026-02-05 02:59:58.909542 | orchestrator |
2026-02-05 02:59:58.909571 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-02-05 03:00:00.616592 | orchestrator | Thursday 05 February 2026 02:59:58 +0000 (0:00:01.854) 0:05:23.353 *****
2026-02-05 03:00:00.616663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 03:00:00.616673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 03:00:00.616679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 03:00:00.616700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 03:00:00.616727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 03:00:00.616735 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:00:00.616756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 03:00:00.616763 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:00:00.616770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 03:00:00.616776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 03:00:00.616783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 03:00:00.616798 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:00:00.616806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 03:00:00.616819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 03:00:04.026743 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:00:04.026882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 03:00:04.026899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 03:00:04.026958 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:00:04.026968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 03:00:04.026977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 03:00:04.027006 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:00:04.027014 | orchestrator |
2026-02-05 03:00:04.027023 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-02-05 03:00:04.027032 | orchestrator | Thursday 05 February 2026 03:00:00 +0000 (0:00:01.707) 0:05:25.061 *****
2026-02-05 03:00:04.027040 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-05 03:00:04.027048 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-05 03:00:04.027068 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:00:04.027076 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-05 03:00:04.027084 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-05 03:00:04.027091 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:00:04.027099 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-05 03:00:04.027106 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-05 03:00:04.027114 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:00:04.027121 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-05 03:00:04.027128 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-05 03:00:04.027136 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:00:04.027143 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-05 03:00:04.027150 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-05 03:00:04.027158 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:00:04.027165 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-05 03:00:04.027173 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-05 03:00:04.027180 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:00:04.027187 | orchestrator |
2026-02-05 03:00:04.027195 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-02-05 03:00:04.027203 | orchestrator | Thursday 05 February 2026 03:00:01 +0000 (0:00:00.693) 0:05:25.755 *****
2026-02-05 03:00:04.027227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 03:00:04.027237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 03:00:04.027250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 03:00:04.027263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 03:00:04.027274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 03:00:04.027290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 03:00:56.420305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 03:00:56.420389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 03:00:56.420418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 03:00:56.420427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 03:00:56.420446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 03:00:56.420456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 03:00:56.420475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 03:00:56.420483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 03:00:56.420495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 03:00:56.420502 | orchestrator |
2026-02-05 03:00:56.420512 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-05 03:00:56.420520 | orchestrator | Thursday 05 February 2026 03:00:04 +0000 (0:00:02.952) 0:05:28.707 *****
2026-02-05 03:00:56.420528 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:00:56.420536 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:00:56.420542 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:00:56.420549 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:00:56.420556 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:00:56.420562 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:00:56.420569 | orchestrator |
2026-02-05 03:00:56.420576 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 03:00:56.420583 | orchestrator | Thursday 05 February 2026 03:00:04 +0000 (0:00:00.607) 0:05:29.314 *****
2026-02-05 03:00:56.420589 | orchestrator |
2026-02-05 03:00:56.420596 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 03:00:56.420603 | orchestrator | Thursday 05 February 2026 03:00:04 +0000 (0:00:00.139) 0:05:29.453 *****
2026-02-05 03:00:56.420609 | orchestrator |
2026-02-05 03:00:56.420617 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 03:00:56.420627 | orchestrator | Thursday 05 February 2026 03:00:05 +0000 (0:00:00.310) 0:05:29.764 *****
2026-02-05 03:00:56.420634 | orchestrator |
2026-02-05 03:00:56.420641 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 03:00:56.420647 | orchestrator | Thursday 05 February 2026 03:00:05 +0000 (0:00:00.138) 0:05:29.902 *****
2026-02-05 03:00:56.420654 | orchestrator |
2026-02-05 03:00:56.420661 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 03:00:56.420668 | orchestrator | Thursday 05 February 2026 03:00:05 +0000 (0:00:00.136) 0:05:30.038 *****
2026-02-05 03:00:56.420674 | orchestrator |
2026-02-05 03:00:56.420681 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 03:00:56.420688 | orchestrator | Thursday 05 February 2026 03:00:05 +0000 (0:00:00.137) 0:05:30.175 *****
2026-02-05 03:00:56.420695 | orchestrator |
2026-02-05 03:00:56.420702 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-05 03:00:56.420708 | orchestrator | Thursday 05 February 2026 03:00:05 +0000 (0:00:00.138) 0:05:30.313 *****
2026-02-05 03:00:56.420715 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:00:56.420722 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:00:56.420729 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:00:56.420736 | orchestrator |
2026-02-05 03:00:56.420742 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-05 03:00:56.420749 | orchestrator | Thursday 05 February 2026 03:00:12 +0000 (0:00:06.962) 0:05:37.276 *****
2026-02-05 03:00:56.420756 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:00:56.420763 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:00:56.420769 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:00:56.420776 | orchestrator |
2026-02-05 03:00:56.420783 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-05 03:00:56.420803 | orchestrator | Thursday 05 February 2026 03:00:30 +0000 (0:00:18.068) 0:05:55.345 *****
2026-02-05 03:00:56.420809 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:00:56.420816 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:00:56.420823 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:00:56.420830 | orchestrator |
2026-02-05 03:00:56.420841 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-05 03:03:15.120765 | orchestrator | Thursday 05 February 2026 03:00:56 +0000 (0:00:25.521) 0:06:20.866 *****
2026-02-05 03:03:15.120878 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:03:15.120895 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:03:15.120907 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:03:15.120918 | orchestrator |
2026-02-05 03:03:15.120995 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-05 03:03:15.121008 | orchestrator | Thursday 05 February 2026 03:01:33 +0000 (0:00:37.181) 0:06:58.047 *****
2026-02-05 03:03:15.121019 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2026-02-05 03:03:15.121032 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-02-05 03:03:15.121043 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-02-05 03:03:15.121054 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:03:15.121065 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:03:15.121076 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:03:15.121087 | orchestrator |
2026-02-05 03:03:15.121098 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-05 03:03:15.121109 | orchestrator | Thursday 05 February 2026 03:01:39 +0000 (0:00:06.384) 0:07:04.432 *****
2026-02-05 03:03:15.121120 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:03:15.121131 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:03:15.121142 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:03:15.121153 | orchestrator |
2026-02-05 03:03:15.121164 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-05 03:03:15.121175 | orchestrator | Thursday 05 February 2026 03:01:40 +0000 (0:00:00.823) 0:07:05.256 *****
2026-02-05 03:03:15.121186 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:03:15.121197 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:03:15.121208 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:03:15.121219 | orchestrator |
2026-02-05 03:03:15.121230 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-05 03:03:15.121242 | orchestrator | Thursday 05 February 2026 03:02:05 +0000 (0:00:24.726) 0:07:29.983 *****
2026-02-05 03:03:15.121252 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:03:15.121263 | orchestrator |
2026-02-05 03:03:15.121275 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-05 03:03:15.121286 | orchestrator | Thursday 05 February 2026 03:02:05 +0000 (0:00:00.129) 0:07:30.112 *****
2026-02-05 03:03:15.121296 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:03:15.121310 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:15.121323 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:03:15.121335 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:15.121428 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:15.121442 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-05 03:03:15.121458 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 03:03:15.121471 | orchestrator |
2026-02-05 03:03:15.121485 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-05 03:03:15.121498 | orchestrator | Thursday 05 February 2026 03:02:28 +0000 (0:00:23.166) 0:07:53.278 *****
2026-02-05 03:03:15.121511 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:03:15.121524 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:03:15.121564 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:15.121578 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:15.121590 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:15.121603 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:03:15.121615 | orchestrator |
2026-02-05 03:03:15.121628 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-05 03:03:15.121641 | orchestrator | Thursday 05 February 2026 03:02:37 +0000 (0:00:08.521) 0:08:01.800 *****
2026-02-05 03:03:15.121670 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:03:15.121681 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:15.121693 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:03:15.121704 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:15.121715 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:15.121726 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-05 03:03:15.121737 | orchestrator |
2026-02-05 03:03:15.121748 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-05 03:03:15.121759 | orchestrator | Thursday 05 February 2026 03:02:41 +0000 (0:00:03.706) 0:08:05.506 *****
2026-02-05 03:03:15.121775 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 03:03:15.121795 | orchestrator |
2026-02-05 03:03:15.121809 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-05 03:03:15.121819 | orchestrator | Thursday 05 February 2026 03:02:54 +0000 (0:00:13.590) 0:08:19.097 *****
2026-02-05 03:03:15.121830 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 03:03:15.121841 | orchestrator |
2026-02-05 03:03:15.121852 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-05 03:03:15.121862 | orchestrator | Thursday 05 February 2026 03:02:55 +0000 (0:00:01.346) 0:08:20.447 *****
2026-02-05 03:03:15.121873 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:03:15.121884 | orchestrator |
2026-02-05 03:03:15.121895 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-05 03:03:15.121905 | orchestrator | Thursday 05 February 2026 03:02:57 +0000 (0:00:01.347) 0:08:21.794 *****
2026-02-05 03:03:15.121916 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 03:03:15.121954 | orchestrator |
2026-02-05 03:03:15.121965 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-05 03:03:15.121976 | orchestrator | Thursday 05 February 2026 03:03:09 +0000 (0:00:12.566) 0:08:34.361 *****
2026-02-05 03:03:15.121987 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:03:15.121998 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:03:15.122009 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:03:15.122102 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:03:15.122115 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:03:15.122126 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:03:15.122136 | orchestrator |
2026-02-05 03:03:15.122147 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-05 03:03:15.122158 | orchestrator |
2026-02-05 03:03:15.122169 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-05 03:03:15.122180 | orchestrator | Thursday 05 February 2026 03:03:11 +0000 (0:00:01.788) 0:08:36.149 *****
2026-02-05 03:03:15.122191 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:03:15.122202 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:03:15.122213 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:03:15.122224 | orchestrator |
2026-02-05 03:03:15.122235 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-05 03:03:15.122245 | orchestrator |
2026-02-05 03:03:15.122256 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-05 03:03:15.122267 | orchestrator | Thursday 05 February 2026 03:03:12 +0000 (0:00:01.207) 0:08:37.357 *****
2026-02-05 03:03:15.122278 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:15.122289 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:15.122300 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:15.122320 | orchestrator |
2026-02-05 03:03:15.122331 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-05 03:03:15.122342 | orchestrator |
2026-02-05 03:03:15.122353 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-05 03:03:15.122364 | orchestrator | Thursday 05 February 2026 03:03:13 +0000 (0:00:00.599) 0:08:37.956 *****
2026-02-05 03:03:15.122374 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-05 03:03:15.122385 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-05 03:03:15.122397 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-05 03:03:15.122407 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-05 03:03:15.122418 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-05 03:03:15.122429 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-05 03:03:15.122440 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:03:15.122451 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-05 03:03:15.122461 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-05 03:03:15.122472 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-05 03:03:15.122483 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-05 03:03:15.122494 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-05 03:03:15.122504 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-05 03:03:15.122515 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:03:15.122527 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-05 03:03:15.122537 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-05 03:03:15.122548 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-05 03:03:15.122559 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-05 03:03:15.122569 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-05 03:03:15.122580 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-05 03:03:15.122591 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:03:15.122602 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-05 03:03:15.122612 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-05 03:03:15.122623 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-05 03:03:15.122634 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-05 03:03:15.122651 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-05 03:03:15.122663 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-05 03:03:15.122673 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:15.122684 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-05 03:03:15.122695 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-05 03:03:15.122706 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-05 03:03:15.122716 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-05 03:03:15.122727 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-05 03:03:15.122738 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-05 03:03:15.122748 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:15.122759 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-05 03:03:15.122770 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-05 03:03:15.122781 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-05 03:03:15.122792 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-05 03:03:15.122802 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-05 03:03:15.122813 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-05 03:03:15.122831 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:15.122842 | orchestrator |
2026-02-05 03:03:15.122853 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-05 03:03:15.122863 | orchestrator |
2026-02-05 03:03:15.122874 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-05 03:03:15.122885 | orchestrator | Thursday 05 February 2026 03:03:14 +0000 (0:00:01.393) 0:08:39.349 *****
2026-02-05 03:03:15.122895 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-05 03:03:15.122907 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-05 03:03:15.122918 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:15.122955 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-05 03:03:17.268051 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-05 03:03:17.268143 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:17.268156 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-05 03:03:17.268165 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-05 03:03:17.268174 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:17.268183 | orchestrator |
2026-02-05 03:03:17.268193 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-05 03:03:17.268203 | orchestrator |
2026-02-05 03:03:17.268212 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-05 03:03:17.268221 | orchestrator | Thursday 05 February 2026 03:03:15 +0000 (0:00:00.770) 0:08:40.120 *****
2026-02-05 03:03:17.268230 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:17.268238 | orchestrator |
2026-02-05 03:03:17.268247 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-05 03:03:17.268255 | orchestrator |
2026-02-05 03:03:17.268264 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-05 03:03:17.268273 | orchestrator | Thursday 05 February 2026 03:03:16 +0000 (0:00:00.708) 0:08:40.828 *****
2026-02-05 03:03:17.268281 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:17.268290 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:17.268299 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:17.268307 | orchestrator |
2026-02-05 03:03:17.268316 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:03:17.268325 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:03:17.268336 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-05 03:03:17.268345 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-05 03:03:17.268354 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-05 03:03:17.268363 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-05 03:03:17.268371 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-05 03:03:17.268380 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-05 03:03:17.268388 | orchestrator |
2026-02-05 03:03:17.268397 | orchestrator |
2026-02-05 03:03:17.268406 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:03:17.268414 | orchestrator | Thursday 05 February 2026 03:03:16 +0000 (0:00:00.447) 0:08:41.275 *****
2026-02-05 03:03:17.268423 | orchestrator | ===============================================================================
2026-02-05 03:03:17.268459 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.18s
2026-02-05 03:03:17.268468 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.10s
2026-02-05 03:03:17.268477 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.52s
2026-02-05 03:03:17.268511 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.73s
2026-02-05 03:03:17.268520 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.17s
2026-02-05 03:03:17.268530 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.57s
2026-02-05 03:03:17.268539 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.47s
2026-02-05 03:03:17.268548 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.07s
2026-02-05 03:03:17.268557 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.22s
2026-02-05 03:03:17.268566 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.81s
2026-02-05 03:03:17.268578 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.59s
2026-02-05 03:03:17.268589 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.00s
2026-02-05 03:03:17.268600 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.94s
2026-02-05 03:03:17.268611 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.57s
2026-02-05 03:03:17.268622 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.57s
2026-02-05 03:03:17.268634 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.41s
2026-02-05 03:03:17.268644 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.85s
2026-02-05 03:03:17.268655 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.52s
2026-02-05 03:03:17.268666 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.38s
2026-02-05 03:03:17.268677 | orchestrator | nova-cell : Copying files for
nova-ssh ---------------------------------- 7.08s 2026-02-05 03:03:19.564472 | orchestrator | 2026-02-05 03:03:19 | INFO  | Task c9cb676f-7667-4aa5-b46c-0f9d6294d649 (horizon) was prepared for execution. 2026-02-05 03:03:19.564605 | orchestrator | 2026-02-05 03:03:19 | INFO  | It takes a moment until task c9cb676f-7667-4aa5-b46c-0f9d6294d649 (horizon) has been started and output is visible here. 2026-02-05 03:03:26.925614 | orchestrator | 2026-02-05 03:03:26.925731 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:03:26.925747 | orchestrator | 2026-02-05 03:03:26.925760 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:03:26.925772 | orchestrator | Thursday 05 February 2026 03:03:23 +0000 (0:00:00.254) 0:00:00.255 ***** 2026-02-05 03:03:26.925783 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:26.925795 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:26.925806 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:26.925817 | orchestrator | 2026-02-05 03:03:26.925828 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:03:26.925839 | orchestrator | Thursday 05 February 2026 03:03:23 +0000 (0:00:00.317) 0:00:00.572 ***** 2026-02-05 03:03:26.925850 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-05 03:03:26.925863 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-05 03:03:26.925874 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-05 03:03:26.925885 | orchestrator | 2026-02-05 03:03:26.925896 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-05 03:03:26.925907 | orchestrator | 2026-02-05 03:03:26.925918 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 03:03:26.926119 | orchestrator 
| Thursday 05 February 2026 03:03:24 +0000 (0:00:00.436) 0:00:01.008 ***** 2026-02-05 03:03:26.926170 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:03:26.926196 | orchestrator | 2026-02-05 03:03:26.926210 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-05 03:03:26.926223 | orchestrator | Thursday 05 February 2026 03:03:24 +0000 (0:00:00.527) 0:00:01.536 ***** 2026-02-05 03:03:26.926261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:03:26.926307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:03:26.926339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:03:26.926354 | orchestrator | 2026-02-05 03:03:26.926367 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-05 03:03:26.926381 | orchestrator | Thursday 05 February 2026 03:03:26 +0000 (0:00:01.167) 0:00:02.703 ***** 2026-02-05 03:03:26.926400 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:26.926419 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:26.926439 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:26.926453 | orchestrator | 2026-02-05 03:03:26.926487 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 03:03:26.926498 | orchestrator | Thursday 05 February 2026 03:03:26 +0000 (0:00:00.477) 0:00:03.180 ***** 2026-02-05 03:03:26.926517 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'cloudkitty', 'enabled': False})  2026-02-05 03:03:33.056010 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 03:03:33.056141 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 03:03:33.056164 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 03:03:33.056183 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 03:03:33.056256 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 03:03:33.056277 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-05 03:03:33.056295 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 03:03:33.056313 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 03:03:33.056330 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 03:03:33.056347 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 03:03:33.056365 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 03:03:33.056381 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 03:03:33.056398 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 03:03:33.056416 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-05 03:03:33.056434 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 03:03:33.056452 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 03:03:33.056472 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 03:03:33.056491 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 03:03:33.056510 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 03:03:33.056530 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 03:03:33.056548 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 03:03:33.056568 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-05 03:03:33.056587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 03:03:33.056607 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-05 03:03:33.056630 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-05 03:03:33.056666 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-05 03:03:33.056685 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-05 03:03:33.056702 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-05 03:03:33.056720 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-05 03:03:33.056737 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-05 03:03:33.056755 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-05 03:03:33.056772 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-05 03:03:33.056790 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-05 03:03:33.056818 | orchestrator | 2026-02-05 03:03:33.056838 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.056856 | orchestrator | Thursday 05 February 2026 03:03:27 +0000 (0:00:00.824) 0:00:04.004 ***** 2026-02-05 03:03:33.056874 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:33.056892 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:33.056910 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:33.056949 | orchestrator | 2026-02-05 03:03:33.056968 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:33.056985 | orchestrator | Thursday 05 February 2026 03:03:27 +0000 (0:00:00.313) 0:00:04.318 ***** 2026-02-05 03:03:33.057002 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057021 | orchestrator | 2026-02-05 03:03:33.057073 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:33.057091 | orchestrator | Thursday 05 February 2026 03:03:28 +0000 (0:00:00.348) 0:00:04.666 ***** 2026-02-05 03:03:33.057109 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057126 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 03:03:33.057143 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:33.057159 | orchestrator | 2026-02-05 03:03:33.057176 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.057193 | orchestrator | Thursday 05 February 2026 03:03:28 +0000 (0:00:00.331) 0:00:04.998 ***** 2026-02-05 03:03:33.057209 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:33.057226 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:33.057243 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:33.057260 | orchestrator | 2026-02-05 03:03:33.057276 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:33.057293 | orchestrator | Thursday 05 February 2026 03:03:28 +0000 (0:00:00.339) 0:00:05.337 ***** 2026-02-05 03:03:33.057310 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057326 | orchestrator | 2026-02-05 03:03:33.057343 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:33.057360 | orchestrator | Thursday 05 February 2026 03:03:28 +0000 (0:00:00.139) 0:00:05.477 ***** 2026-02-05 03:03:33.057378 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057395 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:33.057412 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:33.057428 | orchestrator | 2026-02-05 03:03:33.057444 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.057459 | orchestrator | Thursday 05 February 2026 03:03:29 +0000 (0:00:00.320) 0:00:05.797 ***** 2026-02-05 03:03:33.057476 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:33.057493 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:33.057508 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:33.057524 | orchestrator | 2026-02-05 03:03:33.057541 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:33.057557 | orchestrator | Thursday 05 February 2026 03:03:29 +0000 (0:00:00.537) 0:00:06.335 ***** 2026-02-05 03:03:33.057572 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057588 | orchestrator | 2026-02-05 03:03:33.057605 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:33.057622 | orchestrator | Thursday 05 February 2026 03:03:29 +0000 (0:00:00.139) 0:00:06.475 ***** 2026-02-05 03:03:33.057639 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057655 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:33.057671 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:33.057688 | orchestrator | 2026-02-05 03:03:33.057703 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.057721 | orchestrator | Thursday 05 February 2026 03:03:30 +0000 (0:00:00.299) 0:00:06.775 ***** 2026-02-05 03:03:33.057738 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:33.057754 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:33.057770 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:33.057798 | orchestrator | 2026-02-05 03:03:33.057815 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:33.057833 | orchestrator | Thursday 05 February 2026 03:03:30 +0000 (0:00:00.348) 0:00:07.123 ***** 2026-02-05 03:03:33.057849 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.057865 | orchestrator | 2026-02-05 03:03:33.057881 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:33.057897 | orchestrator | Thursday 05 February 2026 03:03:30 +0000 (0:00:00.132) 0:00:07.256 ***** 2026-02-05 03:03:33.057914 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
03:03:33.057972 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:33.057991 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:33.058008 | orchestrator | 2026-02-05 03:03:33.058108 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.058126 | orchestrator | Thursday 05 February 2026 03:03:31 +0000 (0:00:00.504) 0:00:07.760 ***** 2026-02-05 03:03:33.058143 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:33.058159 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:33.058175 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:33.058191 | orchestrator | 2026-02-05 03:03:33.058208 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:33.058225 | orchestrator | Thursday 05 February 2026 03:03:31 +0000 (0:00:00.374) 0:00:08.134 ***** 2026-02-05 03:03:33.058242 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.058259 | orchestrator | 2026-02-05 03:03:33.058274 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:33.058290 | orchestrator | Thursday 05 February 2026 03:03:31 +0000 (0:00:00.135) 0:00:08.270 ***** 2026-02-05 03:03:33.058306 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.058324 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:33.058341 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:33.058357 | orchestrator | 2026-02-05 03:03:33.058373 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.058389 | orchestrator | Thursday 05 February 2026 03:03:32 +0000 (0:00:00.305) 0:00:08.576 ***** 2026-02-05 03:03:33.058405 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:33.058423 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:33.058440 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:33.058457 | orchestrator | 
2026-02-05 03:03:33.058472 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:33.058488 | orchestrator | Thursday 05 February 2026 03:03:32 +0000 (0:00:00.339) 0:00:08.915 ***** 2026-02-05 03:03:33.058504 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.058522 | orchestrator | 2026-02-05 03:03:33.058539 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:33.058556 | orchestrator | Thursday 05 February 2026 03:03:32 +0000 (0:00:00.128) 0:00:09.044 ***** 2026-02-05 03:03:33.058572 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:33.058588 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:33.058604 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:33.058622 | orchestrator | 2026-02-05 03:03:33.058639 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:33.058668 | orchestrator | Thursday 05 February 2026 03:03:33 +0000 (0:00:00.575) 0:00:09.620 ***** 2026-02-05 03:03:46.023956 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:46.024058 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:46.024069 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:46.024078 | orchestrator | 2026-02-05 03:03:46.024086 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:46.024095 | orchestrator | Thursday 05 February 2026 03:03:33 +0000 (0:00:00.319) 0:00:09.939 ***** 2026-02-05 03:03:46.024104 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024112 | orchestrator | 2026-02-05 03:03:46.024120 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:46.024146 | orchestrator | Thursday 05 February 2026 03:03:33 +0000 (0:00:00.128) 0:00:10.068 ***** 2026-02-05 03:03:46.024154 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 03:03:46.024161 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:46.024170 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:46.024177 | orchestrator | 2026-02-05 03:03:46.024185 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:46.024192 | orchestrator | Thursday 05 February 2026 03:03:33 +0000 (0:00:00.299) 0:00:10.368 ***** 2026-02-05 03:03:46.024200 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:46.024207 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:46.024214 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:46.024222 | orchestrator | 2026-02-05 03:03:46.024229 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:46.024236 | orchestrator | Thursday 05 February 2026 03:03:34 +0000 (0:00:00.526) 0:00:10.894 ***** 2026-02-05 03:03:46.024243 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024251 | orchestrator | 2026-02-05 03:03:46.024258 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:46.024265 | orchestrator | Thursday 05 February 2026 03:03:34 +0000 (0:00:00.140) 0:00:11.035 ***** 2026-02-05 03:03:46.024272 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024280 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:46.024287 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:46.024294 | orchestrator | 2026-02-05 03:03:46.024301 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:46.024309 | orchestrator | Thursday 05 February 2026 03:03:34 +0000 (0:00:00.290) 0:00:11.325 ***** 2026-02-05 03:03:46.024316 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:46.024323 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:46.024330 | orchestrator | ok: [testbed-node-2] 2026-02-05 
03:03:46.024338 | orchestrator | 2026-02-05 03:03:46.024345 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:46.024352 | orchestrator | Thursday 05 February 2026 03:03:35 +0000 (0:00:00.330) 0:00:11.656 ***** 2026-02-05 03:03:46.024359 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024366 | orchestrator | 2026-02-05 03:03:46.024374 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:46.024381 | orchestrator | Thursday 05 February 2026 03:03:35 +0000 (0:00:00.137) 0:00:11.793 ***** 2026-02-05 03:03:46.024388 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024396 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:46.024403 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:46.024410 | orchestrator | 2026-02-05 03:03:46.024417 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 03:03:46.024425 | orchestrator | Thursday 05 February 2026 03:03:35 +0000 (0:00:00.317) 0:00:12.111 ***** 2026-02-05 03:03:46.024432 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:03:46.024439 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:03:46.024446 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:03:46.024453 | orchestrator | 2026-02-05 03:03:46.024461 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 03:03:46.024480 | orchestrator | Thursday 05 February 2026 03:03:36 +0000 (0:00:00.528) 0:00:12.640 ***** 2026-02-05 03:03:46.024490 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024498 | orchestrator | 2026-02-05 03:03:46.024507 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 03:03:46.024516 | orchestrator | Thursday 05 February 2026 03:03:36 +0000 (0:00:00.137) 0:00:12.777 ***** 2026-02-05 03:03:46.024524 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:46.024534 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:46.024542 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:46.024551 | orchestrator | 2026-02-05 03:03:46.024560 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-05 03:03:46.024569 | orchestrator | Thursday 05 February 2026 03:03:36 +0000 (0:00:00.303) 0:00:13.080 ***** 2026-02-05 03:03:46.024583 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:03:46.024592 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:03:46.024601 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:03:46.024609 | orchestrator | 2026-02-05 03:03:46.024618 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-05 03:03:46.024627 | orchestrator | Thursday 05 February 2026 03:03:38 +0000 (0:00:01.625) 0:00:14.706 ***** 2026-02-05 03:03:46.024646 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-05 03:03:46.024656 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-05 03:03:46.024664 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-05 03:03:46.024673 | orchestrator | 2026-02-05 03:03:46.024691 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-05 03:03:46.024700 | orchestrator | Thursday 05 February 2026 03:03:40 +0000 (0:00:01.967) 0:00:16.673 ***** 2026-02-05 03:03:46.024710 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-05 03:03:46.024720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-05 03:03:46.024727 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-05 03:03:46.024734 | orchestrator |
2026-02-05 03:03:46.024742 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-05 03:03:46.024762 | orchestrator | Thursday 05 February 2026 03:03:41 +0000 (0:00:01.664) 0:00:18.337 *****
2026-02-05 03:03:46.024770 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-05 03:03:46.024778 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-05 03:03:46.024785 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-05 03:03:46.024792 | orchestrator |
2026-02-05 03:03:46.024800 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-05 03:03:46.024807 | orchestrator | Thursday 05 February 2026 03:03:43 +0000 (0:00:01.447) 0:00:19.785 *****
2026-02-05 03:03:46.024814 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:46.024822 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:46.024829 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:46.024836 | orchestrator |
2026-02-05 03:03:46.024843 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-05 03:03:46.024851 | orchestrator | Thursday 05 February 2026 03:03:43 +0000 (0:00:00.418) 0:00:20.076 *****
2026-02-05 03:03:46.024858 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:03:46.024865 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:03:46.024872 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:03:46.024880 | orchestrator |
2026-02-05 03:03:46.024887 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-05 03:03:46.024894 | orchestrator | Thursday 05
February 2026 03:03:43 +0000 (0:00:00.418) 0:00:20.495 ***** 2026-02-05 03:03:46.024902 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:03:46.024909 | orchestrator | 2026-02-05 03:03:46.024916 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-05 03:03:46.024923 | orchestrator | Thursday 05 February 2026 03:03:44 +0000 (0:00:00.624) 0:00:21.119 ***** 2026-02-05 03:03:46.024981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:03:46.025009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:03:47.057351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:03:47.057469 | orchestrator | 2026-02-05 03:03:47.057480 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-05 03:03:47.057488 | orchestrator | Thursday 05 February 2026 03:03:46 +0000 (0:00:01.461) 0:00:22.581 ***** 2026-02-05 03:03:47.057516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 
'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 03:03:47.057535 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:47.057554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 03:03:47.057566 | orchestrator | skipping: [testbed-node-1] 
2026-02-05 03:03:47.057584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 03:03:49.152762 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:49.152860 | orchestrator | 2026-02-05 03:03:49.152874 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-05 03:03:49.152886 | orchestrator | Thursday 05 February 2026 03:03:47 +0000 (0:00:01.039) 0:00:23.620 ***** 2026-02-05 03:03:49.152901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 03:03:49.152915 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:03:49.153000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 03:03:49.153038 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:03:49.153080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 03:03:49.153092 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:03:49.153102 | orchestrator | 2026-02-05 03:03:49.153113 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-05 03:03:49.153131 | orchestrator | Thursday 05 February 2026 03:03:47 +0000 (0:00:00.858) 0:00:24.479 ***** 2026-02-05 03:03:49.153157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:04:37.782297 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 03:04:37.782487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 03:04:37.782508 | orchestrator |
2026-02-05 03:04:37.782522 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-05 03:04:37.782534 | orchestrator | Thursday 05 February 2026 03:03:49 +0000 (0:00:01.238) 0:00:25.718 *****
2026-02-05 03:04:37.782544 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:04:37.782556 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:04:37.782565 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:04:37.782575 | orchestrator |
2026-02-05 03:04:37.782585 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-05 03:04:37.782595 | orchestrator | Thursday 05 February 2026 03:03:49 +0000 (0:00:00.506) 0:00:26.224 *****
2026-02-05 03:04:37.782605 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:04:37.782615 | orchestrator |
2026-02-05 03:04:37.782624 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-05 03:04:37.782634 | orchestrator | Thursday 05 February 2026 03:03:50 +0000 (0:00:00.551) 0:00:26.776 *****
2026-02-05 03:04:37.782644 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:04:37.782653 | orchestrator |
2026-02-05 03:04:37.782663 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-05 03:04:37.782672 | orchestrator | Thursday 05 February 2026 03:03:52 +0000 (0:00:02.376) 0:00:29.153 *****
2026-02-05 03:04:37.782691 | orchestrator | changed:
[testbed-node-0]
2026-02-05 03:04:37.782701 | orchestrator |
2026-02-05 03:04:37.782711 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-05 03:04:37.782721 | orchestrator | Thursday 05 February 2026 03:03:55 +0000 (0:00:02.432) 0:00:31.585 *****
2026-02-05 03:04:37.782730 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:04:37.782740 | orchestrator |
2026-02-05 03:04:37.782749 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-05 03:04:37.782759 | orchestrator | Thursday 05 February 2026 03:04:12 +0000 (0:00:17.711) 0:00:49.297 *****
2026-02-05 03:04:37.782769 | orchestrator |
2026-02-05 03:04:37.782778 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-05 03:04:37.782788 | orchestrator | Thursday 05 February 2026 03:04:12 +0000 (0:00:00.067) 0:00:49.365 *****
2026-02-05 03:04:37.782797 | orchestrator |
2026-02-05 03:04:37.782807 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-05 03:04:37.782818 | orchestrator | Thursday 05 February 2026 03:04:13 +0000 (0:00:00.233) 0:00:49.599 *****
2026-02-05 03:04:37.782830 | orchestrator |
2026-02-05 03:04:37.782841 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-05 03:04:37.782852 | orchestrator | Thursday 05 February 2026 03:04:13 +0000 (0:00:00.077) 0:00:49.676 *****
2026-02-05 03:04:37.782863 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:04:37.782874 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:04:37.782886 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:04:37.782896 | orchestrator |
2026-02-05 03:04:37.782908 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:04:37.782921 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0
skipped=25  rescued=0 ignored=0 2026-02-05 03:04:37.782959 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-05 03:04:37.782973 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-05 03:04:37.782985 | orchestrator | 2026-02-05 03:04:37.782996 | orchestrator | 2026-02-05 03:04:37.783009 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:04:37.783021 | orchestrator | Thursday 05 February 2026 03:04:37 +0000 (0:00:24.642) 0:01:14.319 ***** 2026-02-05 03:04:37.783033 | orchestrator | =============================================================================== 2026-02-05 03:04:37.783044 | orchestrator | horizon : Restart horizon container ------------------------------------ 24.64s 2026-02-05 03:04:37.783056 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.71s 2026-02-05 03:04:37.783073 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.43s 2026-02-05 03:04:37.783086 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.38s 2026-02-05 03:04:37.783097 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.97s 2026-02-05 03:04:37.783106 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.66s 2026-02-05 03:04:37.783116 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.63s 2026-02-05 03:04:37.783126 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.46s 2026-02-05 03:04:37.783136 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.45s 2026-02-05 03:04:37.783145 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.24s 
2026-02-05 03:04:37.783155 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.17s 2026-02-05 03:04:37.783164 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.04s 2026-02-05 03:04:37.783174 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.86s 2026-02-05 03:04:37.783199 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s 2026-02-05 03:04:38.150642 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-02-05 03:04:38.150767 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2026-02-05 03:04:38.150784 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-05 03:04:38.150796 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-02-05 03:04:38.150807 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-02-05 03:04:38.150818 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-02-05 03:04:40.489807 | orchestrator | 2026-02-05 03:04:40 | INFO  | Task 828c8afb-03d7-45f1-b56b-e5053d18ea0c (skyline) was prepared for execution. 2026-02-05 03:04:40.489893 | orchestrator | 2026-02-05 03:04:40 | INFO  | It takes a moment until task 828c8afb-03d7-45f1-b56b-e5053d18ea0c (skyline) has been started and output is visible here. 
2026-02-05 03:05:12.573635 | orchestrator |
2026-02-05 03:05:12.573752 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:05:12.573769 | orchestrator |
2026-02-05 03:05:12.573781 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:05:12.573792 | orchestrator | Thursday 05 February 2026 03:04:44 +0000 (0:00:00.258) 0:00:00.258 *****
2026-02-05 03:05:12.573803 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:05:12.573816 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:05:12.573827 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:05:12.573838 | orchestrator |
2026-02-05 03:05:12.573849 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:05:12.573860 | orchestrator | Thursday 05 February 2026 03:04:44 +0000 (0:00:00.328) 0:00:00.587 *****
2026-02-05 03:05:12.573871 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-02-05 03:05:12.573883 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-02-05 03:05:12.573894 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-02-05 03:05:12.573905 | orchestrator |
2026-02-05 03:05:12.573916 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-02-05 03:05:12.573927 | orchestrator |
2026-02-05 03:05:12.574005 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-05 03:05:12.574079 | orchestrator | Thursday 05 February 2026 03:04:45 +0000 (0:00:00.423) 0:00:01.010 *****
2026-02-05 03:05:12.574092 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:05:12.574103 | orchestrator |
2026-02-05 03:05:12.574124 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-02-05 03:05:12.574135 | orchestrator | Thursday 05 February 2026 03:04:45 +0000 (0:00:00.545) 0:00:01.556 *****
2026-02-05 03:05:12.574146 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-02-05 03:05:12.574158 | orchestrator |
2026-02-05 03:05:12.574171 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-02-05 03:05:12.574184 | orchestrator | Thursday 05 February 2026 03:04:49 +0000 (0:00:03.583) 0:00:05.140 *****
2026-02-05 03:05:12.574197 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-02-05 03:05:12.574211 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-02-05 03:05:12.574225 | orchestrator |
2026-02-05 03:05:12.574238 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-02-05 03:05:12.574252 | orchestrator | Thursday 05 February 2026 03:04:56 +0000 (0:00:06.762) 0:00:11.902 *****
2026-02-05 03:05:12.574266 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 03:05:12.574280 | orchestrator |
2026-02-05 03:05:12.574294 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-02-05 03:05:12.574336 | orchestrator | Thursday 05 February 2026 03:04:59 +0000 (0:00:03.334) 0:00:15.236 *****
2026-02-05 03:05:12.574351 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 03:05:12.574364 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-02-05 03:05:12.574375 | orchestrator |
2026-02-05 03:05:12.574386 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-02-05 03:05:12.574397 | orchestrator | Thursday 05 February 2026 03:05:03 +0000 (0:00:03.474) 0:00:19.332 *****
2026-02-05 03:05:12.574421 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 03:05:12.574433 | orchestrator |
2026-02-05 03:05:12.574443 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] *********************
2026-02-05 03:05:12.574454 | orchestrator | Thursday 05 February 2026 03:05:07 +0000 (0:00:03.474) 0:00:22.807 *****
2026-02-05 03:05:12.574465 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin)
2026-02-05 03:05:12.574476 | orchestrator |
2026-02-05 03:05:12.574486 | orchestrator | TASK [skyline : Ensuring config directories exist] *****************************
2026-02-05 03:05:12.574497 | orchestrator | Thursday 05 February 2026 03:05:11 +0000 (0:00:03.948) 0:00:26.755 *****
2026-02-05 03:05:12.574511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:12.574547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:12.574560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:12.574586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:12.574600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:12.574620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.439834 | orchestrator |
2026-02-05 03:05:16.439935 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-05 03:05:16.440016 | orchestrator | Thursday 05 February 2026 03:05:12 +0000 (0:00:01.391) 0:00:28.146 *****
2026-02-05 03:05:16.440036 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:05:16.440048 | orchestrator |
2026-02-05 03:05:16.440058 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ********
2026-02-05 03:05:16.440068 | orchestrator | Thursday 05 February 2026 03:05:13 +0000 (0:00:00.712) 0:00:28.858 *****
2026-02-05 03:05:16.440081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440259 | orchestrator |
2026-02-05 03:05:16.440274 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] ***
2026-02-05 03:05:16.440297 | orchestrator | Thursday 05 February 2026 03:05:15 +0000 (0:00:02.530) 0:00:31.388 *****
2026-02-05 03:05:16.440314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:16.440349 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:05:16.440380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664191 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:05:17.664229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664253 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:05:17.664264 | orchestrator |
2026-02-05 03:05:17.664275 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] *****
2026-02-05 03:05:17.664286 | orchestrator | Thursday 05 February 2026 03:05:16 +0000 (0:00:00.632) 0:00:32.021 *****
2026-02-05 03:05:17.664296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664359 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:05:17.664374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664396 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:05:17.664406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:17.664434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:26.027198 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:05:26.027326 | orchestrator |
2026-02-05 03:05:26.027344 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ******************
2026-02-05 03:05:26.027358 | orchestrator | Thursday 05 February 2026 03:05:17 +0000 (0:00:01.215) 0:00:33.236 *****
2026-02-05 03:05:26.027388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:26.027405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:26.027417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-05 03:05:26.027453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:26.027492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:26.027506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-05 03:05:26.027518 | orchestrator |
2026-02-05 03:05:26.027530 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-05 03:05:26.027541 | orchestrator | Thursday 05 February 2026 03:05:20 +0000 (0:00:02.462) 0:00:35.698 *****
2026-02-05 03:05:26.027553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-05 03:05:26.027564 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-05 03:05:26.027575 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-05 03:05:26.027586 | orchestrator |
2026-02-05 03:05:26.027597 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-05 03:05:26.027616 | orchestrator | Thursday 05 February 2026 03:05:21 +0000 (0:00:01.608) 0:00:37.307 *****
2026-02-05 03:05:26.027627 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-05 03:05:26.027638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-05 03:05:26.027649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-05 03:05:26.027659 | orchestrator |
2026-02-05 03:05:26.027670 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-05 03:05:26.027700 | orchestrator | Thursday 05 February 2026 03:05:23 +0000 (0:00:01.826) 0:00:39.133 *****
2026-02-05 03:05:26.027715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:26.027739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265848 | orchestrator | 2026-02-05 03:05:28.265856 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-05 03:05:28.265864 | orchestrator | Thursday 05 February 2026 03:05:26 +0000 (0:00:02.471) 0:00:41.605 ***** 2026-02-05 03:05:28.265871 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:05:28.265878 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 03:05:28.265884 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:05:28.265889 | orchestrator | 2026-02-05 03:05:28.265909 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-05 03:05:28.265922 | orchestrator | Thursday 05 February 2026 03:05:26 +0000 (0:00:00.305) 0:00:41.910 ***** 2026-02-05 03:05:28.265928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.265994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:05:28.266012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 03:06:01.366009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-02-05 03:06:01.366209 | orchestrator |
2026-02-05 03:06:01.366228 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-02-05 03:06:01.366242 | orchestrator | Thursday 05 February 2026 03:05:28 +0000 (0:00:01.932) 0:00:43.843 *****
2026-02-05 03:06:01.366253 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:06:01.366265 | orchestrator |
2026-02-05 03:06:01.366277 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-02-05 03:06:01.366288 | orchestrator | Thursday 05 February 2026 03:05:30 +0000 (0:00:02.285) 0:00:46.128 *****
2026-02-05 03:06:01.366299 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:06:01.366309 | orchestrator |
2026-02-05 03:06:01.366321 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-02-05 03:06:01.366332 | orchestrator | Thursday 05 February 2026 03:05:32 +0000 (0:00:02.317) 0:00:48.446 *****
2026-02-05 03:06:01.366344 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:06:01.366356 | orchestrator |
2026-02-05 03:06:01.366369 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-05 03:06:01.366381 | orchestrator | Thursday 05 February 2026 03:05:40 +0000 (0:00:07.806) 0:00:56.253 *****
2026-02-05 03:06:01.366392 | orchestrator |
2026-02-05 03:06:01.366403 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-05 03:06:01.366414 | orchestrator | Thursday 05 February 2026 03:05:40 +0000 (0:00:00.248) 0:00:56.501 *****
2026-02-05 03:06:01.366425 | orchestrator |
2026-02-05 03:06:01.366436 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-05 03:06:01.366447 | orchestrator | Thursday 05 February 2026 03:05:40 +0000 (0:00:00.080) 0:00:56.582 *****
2026-02-05 03:06:01.366458 | orchestrator |
2026-02-05 03:06:01.366469 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-02-05 03:06:01.366479 | orchestrator | Thursday 05 February 2026 03:05:41 +0000 (0:00:00.070) 0:00:56.652 *****
2026-02-05 03:06:01.366491 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:06:01.366505 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:06:01.366519 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:06:01.366532 | orchestrator |
2026-02-05 03:06:01.366545 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-02-05 03:06:01.366558 | orchestrator | Thursday 05 February 2026 03:05:51 +0000 (0:00:10.805) 0:01:07.458 *****
2026-02-05 03:06:01.366570 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:06:01.366584 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:06:01.366596 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:06:01.366609 | orchestrator |
2026-02-05 03:06:01.366622 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:06:01.366637 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 03:06:01.366656 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 03:06:01.366675 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 03:06:01.366706 | orchestrator |
2026-02-05 03:06:01.366725 | orchestrator |
2026-02-05 03:06:01.366743 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:06:01.366761 | orchestrator | Thursday 05 February 2026 03:06:00 +0000 (0:00:09.015) 0:01:16.474 *****
2026-02-05 03:06:01.366780 | orchestrator | ===============================================================================
2026-02-05 03:06:01.366816 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 10.81s
2026-02-05 03:06:01.366836 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.02s
2026-02-05 03:06:01.366855 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.81s
2026-02-05 03:06:01.366873 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.76s
2026-02-05 03:06:01.366892 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.10s
2026-02-05 03:06:01.366910 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.95s
2026-02-05 03:06:01.366928 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.58s
2026-02-05 03:06:01.367020 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.47s
2026-02-05 03:06:01.367068 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.33s
2026-02-05 03:06:01.367089 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.53s
2026-02-05 03:06:01.367107 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.47s
2026-02-05 03:06:01.367195 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.46s
2026-02-05 03:06:01.367207 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.32s
2026-02-05 03:06:01.367218 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.29s
2026-02-05 03:06:01.367229 | orchestrator | skyline : Check skyline container --------------------------------------- 1.93s
2026-02-05 03:06:01.367240 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 1.83s
2026-02-05 03:06:01.367250 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.61s
2026-02-05 03:06:01.367261 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.39s
2026-02-05 03:06:01.367272 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.22s
2026-02-05 03:06:01.367283 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.71s
2026-02-05 03:06:03.669441 | orchestrator | 2026-02-05 03:06:03 | INFO  | Task 7b6b6cf4-aaa2-48fc-bc7d-1c0cdaecd412 (glance) was prepared for execution.
2026-02-05 03:06:03.669545 | orchestrator | 2026-02-05 03:06:03 | INFO  | It takes a moment until task 7b6b6cf4-aaa2-48fc-bc7d-1c0cdaecd412 (glance) has been started and output is visible here.
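The PLAY RECAP block above is the per-host summary Ansible prints at the end of each play; a job like this one is healthy when every host shows failed=0 and unreachable=0. A minimal sketch of checking that mechanically (the parsing approach is our own illustration based on the standard recap line format, not part of this job):

```python
import re

# One host line copied from the skyline PLAY RECAP above.
recap_line = ("testbed-node-0 : ok=22 changed=16 unreachable=0 "
              "failed=0 skipped=3 rescued=0 ignored=0")

def parse_recap(line):
    """Split an Ansible recap line into (host, {counter: value})."""
    host, _, counters = line.partition(" : ")
    # Each counter is emitted as name=integer, whitespace-separated.
    return host.strip(), {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}

host, stats = parse_recap(recap_line)
# A play succeeded when nothing failed and every host was reachable.
assert stats["failed"] == 0 and stats["unreachable"] == 0
```

The same loop applied to all three testbed-node lines would confirm the whole skyline play passed before the glance play starts below.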
2026-02-05 03:06:38.249237 | orchestrator |
2026-02-05 03:06:38.249351 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:06:38.249368 | orchestrator |
2026-02-05 03:06:38.249381 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:06:38.249393 | orchestrator | Thursday 05 February 2026 03:06:07 +0000 (0:00:00.260) 0:00:00.260 *****
2026-02-05 03:06:38.249405 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:06:38.249417 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:06:38.249428 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:06:38.249439 | orchestrator |
2026-02-05 03:06:38.249450 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:06:38.249462 | orchestrator | Thursday 05 February 2026 03:06:08 +0000 (0:00:00.319) 0:00:00.579 *****
2026-02-05 03:06:38.249473 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-05 03:06:38.249484 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-05 03:06:38.249495 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-05 03:06:38.249531 | orchestrator |
2026-02-05 03:06:38.249543 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-05 03:06:38.249554 | orchestrator |
2026-02-05 03:06:38.249565 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-05 03:06:38.249576 | orchestrator | Thursday 05 February 2026 03:06:08 +0000 (0:00:00.452) 0:00:01.032 *****
2026-02-05 03:06:38.249587 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:06:38.249599 | orchestrator |
2026-02-05 03:06:38.249610 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-05 03:06:38.249621 | orchestrator | Thursday 05 February 2026 03:06:09 +0000 (0:00:00.573) 0:00:01.605 *****
2026-02-05 03:06:38.249632 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-05 03:06:38.249643 | orchestrator |
2026-02-05 03:06:38.249654 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-05 03:06:38.249665 | orchestrator | Thursday 05 February 2026 03:06:12 +0000 (0:00:03.599) 0:00:05.205 *****
2026-02-05 03:06:38.249676 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-05 03:06:38.249687 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-05 03:06:38.249698 | orchestrator |
2026-02-05 03:06:38.249709 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-05 03:06:38.249721 | orchestrator | Thursday 05 February 2026 03:06:19 +0000 (0:00:06.602) 0:00:11.807 *****
2026-02-05 03:06:38.249732 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 03:06:38.249743 | orchestrator |
2026-02-05 03:06:38.249754 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-05 03:06:38.249765 | orchestrator | Thursday 05 February 2026 03:06:22 +0000 (0:00:03.369) 0:00:15.176 *****
2026-02-05 03:06:38.249776 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 03:06:38.249787 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-05 03:06:38.249798 | orchestrator |
2026-02-05 03:06:38.249825 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-05 03:06:38.249836 | orchestrator | Thursday 05 February 2026 03:06:26 +0000 (0:00:04.126) 0:00:19.303 *****
2026-02-05 03:06:38.249847 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05
03:06:38.249858 | orchestrator | 2026-02-05 03:06:38.249869 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-05 03:06:38.249880 | orchestrator | Thursday 05 February 2026 03:06:30 +0000 (0:00:03.408) 0:00:22.711 ***** 2026-02-05 03:06:38.249891 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-05 03:06:38.249902 | orchestrator | 2026-02-05 03:06:38.249913 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-05 03:06:38.249923 | orchestrator | Thursday 05 February 2026 03:06:34 +0000 (0:00:03.902) 0:00:26.613 ***** 2026-02-05 03:06:38.249995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:06:38.250081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:06:38.250103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:06:38.250133 | orchestrator | 2026-02-05 03:06:38.250145 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-05 03:06:38.250156 | orchestrator | Thursday 05 February 2026 03:06:37 +0000 (0:00:03.231) 0:00:29.845 ***** 2026-02-05 03:06:38.250167 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:06:38.250179 | orchestrator | 2026-02-05 03:06:38.250198 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-05 03:06:52.925942 | orchestrator | Thursday 05 February 2026 03:06:38 +0000 (0:00:00.781) 0:00:30.626 ***** 2026-02-05 03:06:52.926145 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:06:52.926161 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:06:52.926169 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:06:52.926177 | orchestrator | 2026-02-05 03:06:52.926186 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-05 03:06:52.926194 | orchestrator | Thursday 05 February 2026 03:06:41 +0000 (0:00:03.242) 0:00:33.869 ***** 2026-02-05 03:06:52.926202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:06:52.926211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:06:52.926219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:06:52.926227 | orchestrator | 2026-02-05 03:06:52.926234 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-05 03:06:52.926242 | orchestrator | Thursday 05 February 2026 03:06:43 +0000 (0:00:01.538) 0:00:35.407 ***** 2026-02-05 03:06:52.926249 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 
03:06:52.926257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:06:52.926264 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:06:52.926272 | orchestrator | 2026-02-05 03:06:52.926280 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-05 03:06:52.926292 | orchestrator | Thursday 05 February 2026 03:06:44 +0000 (0:00:01.210) 0:00:36.617 ***** 2026-02-05 03:06:52.926304 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:06:52.926316 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:06:52.926328 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:06:52.926340 | orchestrator | 2026-02-05 03:06:52.926352 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-05 03:06:52.926360 | orchestrator | Thursday 05 February 2026 03:06:45 +0000 (0:00:00.897) 0:00:37.515 ***** 2026-02-05 03:06:52.926368 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:06:52.926376 | orchestrator | 2026-02-05 03:06:52.926383 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-05 03:06:52.926399 | orchestrator | Thursday 05 February 2026 03:06:45 +0000 (0:00:00.138) 0:00:37.654 ***** 2026-02-05 03:06:52.926407 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:06:52.926415 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:06:52.926422 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:06:52.926429 | orchestrator | 2026-02-05 03:06:52.926437 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 03:06:52.926444 | orchestrator | Thursday 05 February 2026 03:06:45 +0000 (0:00:00.314) 0:00:37.968 ***** 2026-02-05 03:06:52.926467 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:06:52.926476 | orchestrator | 2026-02-05 03:06:52.926485 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-05 03:06:52.926494 | orchestrator | Thursday 05 February 2026 03:06:46 +0000 (0:00:00.755) 0:00:38.724 ***** 2026-02-05 03:06:52.926529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:06:52.926561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:06:52.926577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:06:52.926594 | orchestrator | 2026-02-05 03:06:52.926602 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-05 03:06:52.926609 | orchestrator | Thursday 05 February 2026 03:06:50 +0000 (0:00:03.732) 0:00:42.456 ***** 2026-02-05 03:06:52.926624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 03:06:56.377307 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:06:56.377427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 03:06:56.377467 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:06:56.377479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 03:06:56.377489 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:06:56.377499 | orchestrator | 2026-02-05 03:06:56.377509 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-05 03:06:56.377518 | orchestrator | Thursday 05 February 2026 03:06:52 +0000 (0:00:02.850) 0:00:45.307 ***** 2026-02-05 03:06:56.377550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 03:06:56.377569 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:06:56.377579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 03:06:56.377589 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:06:56.377606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 03:07:29.236430 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.236530 | orchestrator | 2026-02-05 03:07:29.236542 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-05 03:07:29.236553 | orchestrator | Thursday 05 February 2026 03:06:56 +0000 (0:00:03.447) 0:00:48.754 ***** 2026-02-05 03:07:29.236562 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.236571 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.236579 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.236587 | orchestrator | 2026-02-05 03:07:29.236611 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-05 03:07:29.236620 | orchestrator | Thursday 05 February 2026 03:06:59 +0000 (0:00:03.284) 0:00:52.039 ***** 2026-02-05 03:07:29.236631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:07:29.236644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:07:29.236693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:07:29.236704 | orchestrator | 2026-02-05 03:07:29.236712 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-05 03:07:29.236721 | orchestrator | Thursday 05 February 2026 03:07:03 +0000 (0:00:03.618) 0:00:55.657 ***** 2026-02-05 03:07:29.236729 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:07:29.236738 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:07:29.236746 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:07:29.236754 | orchestrator | 2026-02-05 03:07:29.236762 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-05 03:07:29.236770 | orchestrator | Thursday 05 February 2026 03:07:08 +0000 (0:00:05.382) 0:01:01.040 ***** 2026-02-05 03:07:29.236779 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.236787 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.236795 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.236804 | orchestrator | 2026-02-05 03:07:29.236812 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-05 03:07:29.236820 | orchestrator | Thursday 05 February 2026 03:07:12 +0000 (0:00:03.460) 0:01:04.500 ***** 2026-02-05 03:07:29.236829 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.236856 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.236864 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.236872 | orchestrator | 2026-02-05 03:07:29.236881 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-05 03:07:29.236889 | orchestrator | Thursday 05 February 2026 03:07:14 +0000 (0:00:02.825) 0:01:07.326 ***** 2026-02-05 03:07:29.236897 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.236905 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.236913 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.236921 | orchestrator | 2026-02-05 03:07:29.236930 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-05 03:07:29.236938 | orchestrator | Thursday 05 February 2026 03:07:18 +0000 (0:00:03.276) 0:01:10.602 ***** 2026-02-05 03:07:29.236946 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.237014 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.237022 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.237038 | orchestrator | 2026-02-05 03:07:29.237046 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-05 03:07:29.237054 | orchestrator | Thursday 05 February 2026 03:07:21 +0000 (0:00:03.307) 0:01:13.910 ***** 2026-02-05 03:07:29.237063 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.237072 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.237080 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.237089 | orchestrator | 2026-02-05 03:07:29.237097 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-05 03:07:29.237106 | orchestrator | Thursday 05 February 2026 03:07:22 +0000 (0:00:00.526) 0:01:14.437 ***** 2026-02-05 03:07:29.237115 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-05 03:07:29.237124 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:07:29.237133 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-05 03:07:29.237141 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:07:29.237150 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-05 03:07:29.237158 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:07:29.237167 | orchestrator | 2026-02-05 03:07:29.237175 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-05 03:07:29.237184 | orchestrator | Thursday 05 February 2026 03:07:25 +0000 (0:00:03.033) 0:01:17.470 ***** 2026-02-05 03:07:29.237192 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:07:29.237201 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:07:29.237209 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:07:29.237218 | orchestrator | 2026-02-05 03:07:29.237226 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-05 03:07:29.237240 | orchestrator | Thursday 05 February 2026 03:07:29 +0000 (0:00:04.143) 0:01:21.613 ***** 2026-02-05 03:08:42.035265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:08:42.035367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:08:42.035422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 03:08:42.035436 | orchestrator | 2026-02-05 03:08:42.035446 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 03:08:42.035455 | orchestrator | Thursday 05 February 2026 03:07:32 +0000 (0:00:03.342) 0:01:24.956 ***** 2026-02-05 03:08:42.035463 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:08:42.035473 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:08:42.035481 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:08:42.035489 | orchestrator | 2026-02-05 03:08:42.035497 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-05 03:08:42.035506 | orchestrator | Thursday 05 February 2026 03:07:33 +0000 (0:00:00.476) 0:01:25.433 ***** 2026-02-05 03:08:42.035515 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:08:42.035523 | orchestrator | 2026-02-05 03:08:42.035532 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-05 03:08:42.035540 | orchestrator | Thursday 05 February 2026 03:07:35 +0000 (0:00:02.193) 0:01:27.626 ***** 2026-02-05 03:08:42.035554 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:08:42.035562 | orchestrator | 2026-02-05 03:08:42.035570 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-05 03:08:42.035578 | orchestrator | Thursday 05 February 2026 03:07:37 +0000 (0:00:02.496) 0:01:30.122 ***** 2026-02-05 03:08:42.035586 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:08:42.035595 | orchestrator | 2026-02-05 03:08:42.035603 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-05 03:08:42.035612 | orchestrator | Thursday 05 February 2026 03:07:39 +0000 (0:00:02.128) 0:01:32.251 ***** 2026-02-05 03:08:42.035620 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:08:42.035628 | orchestrator | 2026-02-05 03:08:42.035636 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-05 03:08:42.035644 | orchestrator | Thursday 05 February 2026 03:08:08 +0000 (0:00:29.136) 0:02:01.387 ***** 2026-02-05 03:08:42.035652 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:08:42.035660 | orchestrator | 2026-02-05 03:08:42.035668 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-05 03:08:42.035676 | orchestrator | Thursday 05 February 2026 03:08:11 +0000 (0:00:02.146) 0:02:03.534 ***** 2026-02-05 03:08:42.035685 | orchestrator | 2026-02-05 03:08:42.035693 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-05 03:08:42.035701 | orchestrator | Thursday 05 February 2026 03:08:11 +0000 (0:00:00.070) 0:02:03.605 ***** 2026-02-05 03:08:42.035709 | orchestrator | 2026-02-05 03:08:42.035717 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-05 03:08:42.035725 | orchestrator | Thursday 05 February 2026 03:08:11 +0000 (0:00:00.068) 0:02:03.674 ***** 2026-02-05 03:08:42.035733 | orchestrator | 2026-02-05 03:08:42.035741 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-05 03:08:42.035750 | orchestrator | Thursday 05 February 2026 03:08:11 +0000 (0:00:00.069) 0:02:03.743 ***** 2026-02-05 03:08:42.035758 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:08:42.035766 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:08:42.035775 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:08:42.035783 | orchestrator | 2026-02-05 03:08:42.035791 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:08:42.035801 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 03:08:42.035810 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 03:08:42.035819 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 03:08:42.035828 | orchestrator | 2026-02-05 03:08:42.035836 | orchestrator | 2026-02-05 03:08:42.035845 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:08:42.035853 | orchestrator | Thursday 05 February 2026 03:08:41 +0000 (0:00:30.637) 0:02:34.381 ***** 2026-02-05 03:08:42.035861 | orchestrator | =============================================================================== 2026-02-05 03:08:42.035870 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.64s 2026-02-05 03:08:42.035878 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.14s 2026-02-05 03:08:42.035886 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.60s 2026-02-05 03:08:42.035899 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.38s 2026-02-05 03:08:42.320624 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.14s 2026-02-05 03:08:42.320708 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.13s 2026-02-05 03:08:42.320715 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.90s 2026-02-05 03:08:42.320748 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.73s 2026-02-05 03:08:42.320752 | orchestrator | glance : Copying over config.json files for services -------------------- 3.62s 2026-02-05 03:08:42.320756 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.60s 2026-02-05 03:08:42.320760 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.46s 2026-02-05 03:08:42.320764 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.45s 2026-02-05 03:08:42.320768 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.41s 2026-02-05 03:08:42.320771 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.37s 2026-02-05 03:08:42.320775 | orchestrator | glance : Check glance containers ---------------------------------------- 3.34s 2026-02-05 03:08:42.320779 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.31s 2026-02-05 03:08:42.320783 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.28s 2026-02-05 03:08:42.320787 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.28s 2026-02-05 03:08:42.320790 | orchestrator | 
glance : Ensuring glance service ceph config subdir exists -------------- 3.24s 2026-02-05 03:08:42.320794 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.23s 2026-02-05 03:08:44.687229 | orchestrator | 2026-02-05 03:08:44 | INFO  | Task c30d5f65-c23e-460a-bbe2-e88351336e90 (cinder) was prepared for execution. 2026-02-05 03:08:44.687455 | orchestrator | 2026-02-05 03:08:44 | INFO  | It takes a moment until task c30d5f65-c23e-460a-bbe2-e88351336e90 (cinder) has been started and output is visible here. 2026-02-05 03:09:21.532525 | orchestrator | 2026-02-05 03:09:21.532670 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:09:21.532697 | orchestrator | 2026-02-05 03:09:21.532717 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:09:21.532736 | orchestrator | Thursday 05 February 2026 03:08:48 +0000 (0:00:00.257) 0:00:00.257 ***** 2026-02-05 03:09:21.532757 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:09:21.532776 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:09:21.532794 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:09:21.532813 | orchestrator | 2026-02-05 03:09:21.532833 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:09:21.532852 | orchestrator | Thursday 05 February 2026 03:08:49 +0000 (0:00:00.314) 0:00:00.572 ***** 2026-02-05 03:09:21.532872 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-05 03:09:21.532892 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-05 03:09:21.532911 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-05 03:09:21.532930 | orchestrator | 2026-02-05 03:09:21.532949 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-05 03:09:21.532991 | orchestrator | 2026-02-05 
03:09:21.533013 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 03:09:21.533036 | orchestrator | Thursday 05 February 2026 03:08:49 +0000 (0:00:00.435) 0:00:01.008 ***** 2026-02-05 03:09:21.533060 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:09:21.533083 | orchestrator | 2026-02-05 03:09:21.533105 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-05 03:09:21.533129 | orchestrator | Thursday 05 February 2026 03:08:50 +0000 (0:00:00.585) 0:00:01.593 ***** 2026-02-05 03:09:21.533153 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-05 03:09:21.533174 | orchestrator | 2026-02-05 03:09:21.533195 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-05 03:09:21.533218 | orchestrator | Thursday 05 February 2026 03:08:54 +0000 (0:00:03.792) 0:00:05.385 ***** 2026-02-05 03:09:21.533258 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-05 03:09:21.533332 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-05 03:09:21.533351 | orchestrator | 2026-02-05 03:09:21.533369 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-05 03:09:21.533391 | orchestrator | Thursday 05 February 2026 03:09:00 +0000 (0:00:06.695) 0:00:12.081 ***** 2026-02-05 03:09:21.533407 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 03:09:21.533423 | orchestrator | 2026-02-05 03:09:21.533439 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-05 03:09:21.533456 | orchestrator | Thursday 05 February 2026 03:09:04 +0000 (0:00:03.519) 
0:00:15.601 ***** 2026-02-05 03:09:21.533471 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 03:09:21.533488 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-05 03:09:21.533504 | orchestrator | 2026-02-05 03:09:21.533521 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-05 03:09:21.533536 | orchestrator | Thursday 05 February 2026 03:09:08 +0000 (0:00:04.214) 0:00:19.815 ***** 2026-02-05 03:09:21.533552 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 03:09:21.533569 | orchestrator | 2026-02-05 03:09:21.533584 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-05 03:09:21.533602 | orchestrator | Thursday 05 February 2026 03:09:11 +0000 (0:00:03.309) 0:00:23.125 ***** 2026-02-05 03:09:21.533619 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-05 03:09:21.533635 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-05 03:09:21.533651 | orchestrator | 2026-02-05 03:09:21.533687 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-05 03:09:21.533704 | orchestrator | Thursday 05 February 2026 03:09:19 +0000 (0:00:07.794) 0:00:30.920 ***** 2026-02-05 03:09:21.533725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:21.533780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:21.533801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:21.533836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:21.533854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:21.533879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:21.533898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:21.533928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:27.440009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:27.440107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:27.440134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:27.440149 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:27.440163 | orchestrator | 2026-02-05 03:09:27.440177 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 03:09:27.440191 | orchestrator | Thursday 05 February 2026 03:09:21 +0000 (0:00:02.039) 0:00:32.960 ***** 2026-02-05 03:09:27.440204 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:09:27.440220 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:09:27.440235 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:09:27.440248 | orchestrator | 2026-02-05 03:09:27.440259 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 03:09:27.440267 | orchestrator | Thursday 05 February 2026 03:09:21 +0000 (0:00:00.295) 0:00:33.255 ***** 2026-02-05 03:09:27.440276 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:09:27.440284 | orchestrator | 2026-02-05 03:09:27.440292 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-05 03:09:27.440301 | orchestrator | Thursday 05 February 2026 03:09:22 +0000 (0:00:00.741) 0:00:33.996 ***** 2026-02-05 03:09:27.440327 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-05 03:09:27.440337 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-05 03:09:27.440344 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-05 03:09:27.440352 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-05 03:09:27.440360 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-05 03:09:27.440368 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-05 03:09:27.440376 | orchestrator | 2026-02-05 03:09:27.440384 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-05 03:09:27.440392 | orchestrator | Thursday 05 February 2026 03:09:24 +0000 (0:00:01.681) 0:00:35.677 ***** 2026-02-05 03:09:27.440416 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 03:09:27.440427 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 03:09:27.440442 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 03:09:27.440451 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 03:09:27.440471 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 03:09:38.382486 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 03:09:38.382596 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 03:09:38.382628 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 03:09:38.382641 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 03:09:38.382675 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 03:09:38.382704 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 
03:09:38.382715 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 03:09:38.382726 | orchestrator | 2026-02-05 03:09:38.382739 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-05 03:09:38.382750 | orchestrator | Thursday 05 February 2026 03:09:27 +0000 (0:00:03.375) 0:00:39.053 ***** 2026-02-05 03:09:38.382760 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:09:38.382771 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:09:38.382781 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 03:09:38.382790 | orchestrator | 2026-02-05 03:09:38.382800 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-05 03:09:38.382810 | orchestrator | Thursday 05 February 2026 03:09:29 +0000 (0:00:01.792) 0:00:40.846 ***** 2026-02-05 03:09:38.382821 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-05 03:09:38.382836 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-05 03:09:38.382846 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-05 03:09:38.382856 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 03:09:38.382865 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 03:09:38.382875 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 03:09:38.382892 | orchestrator | 2026-02-05 03:09:38.382902 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-05 03:09:38.382912 | orchestrator | Thursday 05 February 2026 03:09:32 +0000 (0:00:02.604) 0:00:43.451 ***** 2026-02-05 03:09:38.382922 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-05 03:09:38.382932 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-05 03:09:38.382941 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-05 03:09:38.382951 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-05 03:09:38.382961 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-05 03:09:38.383004 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-05 03:09:38.383017 | orchestrator | 2026-02-05 03:09:38.383029 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-05 03:09:38.383041 | orchestrator | Thursday 05 February 2026 03:09:33 +0000 (0:00:01.072) 0:00:44.523 ***** 2026-02-05 03:09:38.383053 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:09:38.383065 | orchestrator | 2026-02-05 03:09:38.383076 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-05 03:09:38.383087 | orchestrator | Thursday 05 February 2026 03:09:33 +0000 (0:00:00.137) 0:00:44.661 ***** 2026-02-05 03:09:38.383100 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:09:38.383117 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 03:09:38.383134 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:09:38.383150 | orchestrator | 2026-02-05 03:09:38.383166 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 03:09:38.383184 | orchestrator | Thursday 05 February 2026 03:09:33 +0000 (0:00:00.508) 0:00:45.169 ***** 2026-02-05 03:09:38.383202 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:09:38.383221 | orchestrator | 2026-02-05 03:09:38.383238 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-05 03:09:38.383251 | orchestrator | Thursday 05 February 2026 03:09:34 +0000 (0:00:00.564) 0:00:45.734 ***** 2026-02-05 03:09:38.383274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:39.276589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:39.276713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:39.276756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 
03:09:39.276867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:39.276904 | orchestrator | 2026-02-05 03:09:39.276918 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-05 03:09:39.276931 | orchestrator | Thursday 05 February 2026 03:09:38 +0000 (0:00:04.077) 0:00:49.812 ***** 2026-02-05 03:09:39.276952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:39.371504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371635 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:09:39.371649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:39.371662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371774 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:09:39.371786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:39.371798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.371861 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 03:09:39.371873 | orchestrator | 2026-02-05 03:09:39.371885 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-05 03:09:39.371912 | orchestrator | Thursday 05 February 2026 03:09:39 +0000 (0:00:00.895) 0:00:50.708 ***** 2026-02-05 03:09:39.944899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:39.945022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945052 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:09:39.945061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:39.945101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945129 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:09:39.945137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:39.945144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:39.945160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:44.252956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:44.253107 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:09:44.253123 | orchestrator | 2026-02-05 03:09:44.253133 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-05 03:09:44.253144 | orchestrator | Thursday 05 February 2026 03:09:40 +0000 (0:00:00.892) 0:00:51.601 ***** 2026-02-05 03:09:44.253155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:44.253166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 
03:09:44.253196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:44.253228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:44.253250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:44.253260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:44.253270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:44.253281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:44.253297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:44.253313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.483723 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.483854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.483873 | orchestrator | 2026-02-05 03:09:56.483888 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-05 03:09:56.483901 | orchestrator | Thursday 05 February 2026 03:09:44 +0000 (0:00:04.082) 0:00:55.683 ***** 2026-02-05 03:09:56.483913 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-05 03:09:56.483925 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-05 03:09:56.483936 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-05 03:09:56.483947 | orchestrator | 2026-02-05 03:09:56.483958 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-05 03:09:56.484029 | orchestrator | Thursday 05 February 2026 03:09:46 +0000 (0:00:01.841) 0:00:57.524 ***** 2026-02-05 03:09:56.484072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:56.484086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:56.484129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:09:56.484144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.484156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.484176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.484188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.484201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:56.484227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:58.740924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:58.741140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:58.741191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:09:58.741209 | orchestrator | 2026-02-05 03:09:58.741231 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-05 03:09:58.741252 | orchestrator | Thursday 05 February 2026 03:09:56 +0000 (0:00:10.375) 0:01:07.900 ***** 2026-02-05 03:09:58.741271 | orchestrator | changed: [testbed-node-0] 
2026-02-05 03:09:58.741291 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:09:58.741310 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:09:58.741329 | orchestrator | 2026-02-05 03:09:58.741348 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-05 03:09:58.741367 | orchestrator | Thursday 05 February 2026 03:09:58 +0000 (0:00:01.604) 0:01:09.504 ***** 2026-02-05 03:09:58.741388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:58.741419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-05 03:09:58.741452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:58.741477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:09:58.741488 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:09:58.741500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:09:58.741512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:09:58.741523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:09:58.741548 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:10:02.298131 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:10:02.298209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 03:10:02.298239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:10:02.298248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 03:10:02.298254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 03:10:02.298268 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:10:02.298298 | orchestrator | 2026-02-05 
03:10:02.298306 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-02-05 03:10:02.298312 | orchestrator | Thursday 05 February 2026 03:09:58 +0000 (0:00:00.668) 0:01:10.172 *****
2026-02-05 03:10:02.298317 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:10:02.298322 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:10:02.298327 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:10:02.298331 | orchestrator |
2026-02-05 03:10:02.298336 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-02-05 03:10:02.298341 | orchestrator | Thursday 05 February 2026 03:09:59 +0000 (0:00:00.594) 0:01:10.767 *****
2026-02-05 03:10:02.298370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 03:10:02.298382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:10:02.298388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 03:10:02.298393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:10:02.298399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:10:02.298407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:10:02.298420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:11:38.958472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:11:38.958586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 03:11:38.958602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:11:38.958614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 03:11:38.958642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})
2026-02-05 03:11:38.958676 | orchestrator |
2026-02-05 03:11:38.958689 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-05 03:11:38.958700 | orchestrator | Thursday 05 February 2026 03:10:02 +0000 (0:00:02.961) 0:01:13.729 *****
2026-02-05 03:11:38.958710 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:11:38.958722 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:11:38.958731 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:11:38.958741 | orchestrator |
2026-02-05 03:11:38.958751 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-02-05 03:11:38.958762 | orchestrator | Thursday 05 February 2026 03:10:02 +0000 (0:00:00.294) 0:01:14.023 *****
2026-02-05 03:11:38.958772 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.958781 | orchestrator |
2026-02-05 03:11:38.958808 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-02-05 03:11:38.958818 | orchestrator | Thursday 05 February 2026 03:10:05 +0000 (0:00:02.331) 0:01:16.355 *****
2026-02-05 03:11:38.958828 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.958838 | orchestrator |
2026-02-05 03:11:38.958848 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-02-05 03:11:38.958857 | orchestrator | Thursday 05 February 2026 03:10:07 +0000 (0:00:02.505) 0:01:18.860 *****
2026-02-05 03:11:38.958867 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.958877 | orchestrator |
2026-02-05 03:11:38.958886 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-05 03:11:38.958896 | orchestrator | Thursday 05 February 2026 03:10:28 +0000 (0:00:21.077) 0:01:39.937 *****
2026-02-05 03:11:38.958906 | orchestrator |
2026-02-05 03:11:38.958915 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-05 03:11:38.958925 | orchestrator | Thursday 05 February 2026 03:10:28 +0000 (0:00:00.079) 0:01:40.017 *****
2026-02-05 03:11:38.958934 | orchestrator |
2026-02-05 03:11:38.958944 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-05 03:11:38.958954 | orchestrator | Thursday 05 February 2026 03:10:28 +0000 (0:00:00.236) 0:01:40.253 *****
2026-02-05 03:11:38.958963 | orchestrator |
2026-02-05 03:11:38.958973 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-02-05 03:11:38.959014 | orchestrator | Thursday 05 February 2026 03:10:28 +0000 (0:00:00.073) 0:01:40.327 *****
2026-02-05 03:11:38.959033 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.959046 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:11:38.959058 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:11:38.959069 | orchestrator |
2026-02-05 03:11:38.959084 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-02-05 03:11:38.959101 | orchestrator | Thursday 05 February 2026 03:10:52 +0000 (0:00:23.473) 0:02:03.801 *****
2026-02-05 03:11:38.959118 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.959134 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:11:38.959151 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:11:38.959167 | orchestrator |
2026-02-05 03:11:38.959184 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-02-05 03:11:38.959201 | orchestrator | Thursday 05 February 2026 03:11:02 +0000 (0:00:10.103) 0:02:13.904 *****
2026-02-05 03:11:38.959219 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.959236 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:11:38.959253 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:11:38.959266 | orchestrator |
2026-02-05 03:11:38.959287 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-02-05 03:11:38.959298 | orchestrator | Thursday 05 February 2026 03:11:27 +0000 (0:00:25.242) 0:02:39.147 *****
2026-02-05 03:11:38.959309 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:11:38.959321 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:11:38.959332 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:11:38.959344 | orchestrator |
2026-02-05 03:11:38.959355 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-02-05 03:11:38.959368 | orchestrator | Thursday 05 February 2026 03:11:38 +0000 (0:00:10.840) 0:02:49.988 *****
2026-02-05 03:11:38.959380 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:11:38.959390 | orchestrator |
2026-02-05 03:11:38.959399 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:11:38.959411 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 03:11:38.959428 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 03:11:38.959443 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 03:11:38.959460 | orchestrator |
2026-02-05 03:11:38.959477 | orchestrator |
2026-02-05 03:11:38.959494 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:11:38.959505 | orchestrator | Thursday 05 February 2026 03:11:38 +0000 (0:00:00.295) 0:02:50.283 *****
2026-02-05 03:11:38.959515 | orchestrator | ===============================================================================
2026-02-05 03:11:38.959531 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.24s
2026-02-05 03:11:38.959541 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.47s
2026-02-05 03:11:38.959551 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.08s
2026-02-05 03:11:38.959560 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.84s
2026-02-05 03:11:38.959570 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.38s
2026-02-05 03:11:38.959579 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.10s
2026-02-05 03:11:38.959589 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.79s
2026-02-05 03:11:38.959598 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.70s
2026-02-05 03:11:38.959608 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.21s
2026-02-05 03:11:38.959617 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.08s
2026-02-05 03:11:38.959627 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.08s
2026-02-05 03:11:38.959642 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.79s
2026-02-05 03:11:38.959658 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.52s
2026-02-05 03:11:38.959674 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.38s
2026-02-05 03:11:38.959701 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.31s
2026-02-05 03:11:39.306206 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.96s
2026-02-05 03:11:39.306346 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.60s
2026-02-05 03:11:39.306375 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.51s
2026-02-05 03:11:39.306395 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.33s
2026-02-05 03:11:39.306414 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.04s
2026-02-05 03:11:41.706301 | orchestrator | 2026-02-05 03:11:41 | INFO  | Task 1a4089d5-5bab-4b3b-a8f3-c257e7bc70f4 (barbican) was prepared for execution.
2026-02-05 03:11:41.706500 | orchestrator | 2026-02-05 03:11:41 | INFO  | It takes a moment until task 1a4089d5-5bab-4b3b-a8f3-c257e7bc70f4 (barbican) has been started and output is visible here.
2026-02-05 03:12:26.838326 | orchestrator |
2026-02-05 03:12:26.838437 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:12:26.838451 | orchestrator |
2026-02-05 03:12:26.838460 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:12:26.838468 | orchestrator | Thursday 05 February 2026 03:11:45 +0000 (0:00:00.263) 0:00:00.263 *****
2026-02-05 03:12:26.838477 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:12:26.838487 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:12:26.838494 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:12:26.838502 | orchestrator |
2026-02-05 03:12:26.838510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:12:26.838519 | orchestrator | Thursday 05 February 2026 03:11:46 +0000 (0:00:00.328) 0:00:00.592 *****
2026-02-05 03:12:26.838527 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-02-05 03:12:26.838535 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-02-05 03:12:26.838543 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-02-05 03:12:26.838551 | orchestrator |
2026-02-05 03:12:26.838559 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-02-05 03:12:26.838567 | orchestrator |
2026-02-05 03:12:26.838575 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-05 03:12:26.838583 | orchestrator | Thursday 05 February 2026 03:11:46 +0000 (0:00:00.433) 0:00:01.026 *****
2026-02-05 03:12:26.838591 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:12:26.838600 | orchestrator |
2026-02-05 03:12:26.838607 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-02-05 03:12:26.838615 | orchestrator | Thursday 05 February 2026 03:11:47 +0000 (0:00:00.595) 0:00:01.621 *****
2026-02-05 03:12:26.838624 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-02-05 03:12:26.838632 | orchestrator |
2026-02-05 03:12:26.838639 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-02-05 03:12:26.838647 | orchestrator | Thursday 05 February 2026 03:11:50 +0000 (0:00:03.519) 0:00:05.140 *****
2026-02-05 03:12:26.838655 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-02-05 03:12:26.838663 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-02-05 03:12:26.838671 | orchestrator |
2026-02-05 03:12:26.838679 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-02-05 03:12:26.838687 | orchestrator | Thursday 05 February 2026 03:11:57 +0000 (0:00:06.687) 0:00:11.828 *****
2026-02-05 03:12:26.838695 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 03:12:26.838703 | orchestrator |
2026-02-05 03:12:26.838711 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-02-05 03:12:26.838718 | orchestrator | Thursday 05 February 2026 03:12:00 +0000 (0:00:03.456) 0:00:15.284 *****
2026-02-05 03:12:26.838726 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 03:12:26.838734 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-05 03:12:26.838742 | orchestrator |
2026-02-05 03:12:26.838764 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-05 03:12:26.838773 | orchestrator | Thursday 05 February 2026 03:12:05 +0000 (0:00:04.135) 0:00:19.419 *****
2026-02-05 03:12:26.838781 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 03:12:26.838789 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-05 03:12:26.838797 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-05 03:12:26.838824 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-05 03:12:26.838833 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-05 03:12:26.838841 | orchestrator |
2026-02-05 03:12:26.838848 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-02-05 03:12:26.838856 | orchestrator | Thursday 05 February 2026 03:12:21 +0000 (0:00:16.243) 0:00:35.662 *****
2026-02-05 03:12:26.838864 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-05 03:12:26.838872 | orchestrator |
2026-02-05 03:12:26.838879 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-05 03:12:26.838887 | orchestrator | Thursday 05 February 2026 03:12:25 +0000 (0:00:03.823) 0:00:39.486 *****
2026-02-05 03:12:26.838898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:26.838924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:26.838934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:26.838948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:26.838964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:26.838972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:26.839008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:32.639896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:12:32.640192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:12:32.640231 | orchestrator |
2026-02-05 03:12:32.640252 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-05 03:12:32.640271 | orchestrator | Thursday 05 February 2026 03:12:26 +0000 (0:00:01.684) 0:00:41.170 *****
2026-02-05 03:12:32.640289 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-05 03:12:32.640306 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-05 03:12:32.640323 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-05 03:12:32.640372 | orchestrator |
2026-02-05 03:12:32.640390 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-05 03:12:32.640434 | orchestrator | Thursday 05 February 2026 03:12:27 +0000 (0:00:00.878) 0:00:42.049 *****
2026-02-05 03:12:32.640467 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:12:32.640489 | orchestrator |
2026-02-05 03:12:32.640508 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-05 03:12:32.640527 | orchestrator | Thursday 05 February 2026 03:12:28 +0000 (0:00:00.334) 0:00:42.384 *****
2026-02-05 03:12:32.640566 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:12:32.640585 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:12:32.640603 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:12:32.640621 | orchestrator |
2026-02-05 03:12:32.640638 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-05 03:12:32.640653 | orchestrator | Thursday 05 February 2026 03:12:28 +0000 (0:00:00.320) 0:00:42.705 *****
2026-02-05 03:12:32.640669 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:12:32.640685 | orchestrator |
2026-02-05 03:12:32.640700 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-02-05 03:12:32.640716 | orchestrator | Thursday 05 February 2026 03:12:28 +0000 (0:00:00.583) 0:00:43.289 *****
2026-02-05 03:12:32.640735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-05 03:12:32.640780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value':
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:32.640799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:32.640833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:32.640862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:32.640880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:32.640897 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:32.640925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:34.083027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:34.083141 | orchestrator | 2026-02-05 03:12:34.083154 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-05 03:12:34.083165 | orchestrator | Thursday 05 February 2026 03:12:32 +0000 (0:00:03.684) 0:00:46.974 ***** 2026-02-05 03:12:34.083191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:34.083201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:34.083210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:34.083218 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:12:34.083232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:34.083256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:34.083273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:34.083281 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:12:34.083294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:34.083303 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:34.083310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:34.083319 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:12:34.083326 | orchestrator | 2026-02-05 03:12:34.083333 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-05 03:12:34.083341 | orchestrator | Thursday 05 February 2026 03:12:33 +0000 (0:00:00.634) 0:00:47.608 ***** 2026-02-05 03:12:34.083354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:37.669858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:37.669972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 
03:12:37.670065 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:12:37.670079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:37.670089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:37.670098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:37.670128 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:12:37.670154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:37.670164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:37.670178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:37.670188 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:12:37.670197 | orchestrator | 2026-02-05 03:12:37.670207 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-05 03:12:37.670217 | orchestrator | Thursday 05 February 2026 03:12:34 +0000 (0:00:00.816) 0:00:48.425 ***** 2026-02-05 03:12:37.670227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:37.670237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:37.670258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:46.956661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:46.956816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:46.956839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:46.956855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:46.956892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:46.956900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:46.956908 | orchestrator | 2026-02-05 03:12:46.956918 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-05 03:12:46.956927 | orchestrator | Thursday 05 February 2026 03:12:37 +0000 (0:00:03.579) 0:00:52.004 ***** 2026-02-05 03:12:46.956935 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:12:46.956944 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:12:46.956952 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:12:46.956959 | orchestrator | 2026-02-05 03:12:46.957042 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-05 03:12:46.957052 | orchestrator | Thursday 05 February 2026 03:12:39 +0000 (0:00:01.522) 0:00:53.527 ***** 2026-02-05 03:12:46.957060 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 03:12:46.957068 | orchestrator | 2026-02-05 03:12:46.957075 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-05 03:12:46.957083 | orchestrator | Thursday 05 February 2026 03:12:40 +0000 (0:00:01.011) 0:00:54.538 ***** 2026-02-05 03:12:46.957090 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:12:46.957098 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:12:46.957105 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:12:46.957112 | orchestrator | 2026-02-05 03:12:46.957120 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-05 03:12:46.957127 | orchestrator | Thursday 05 February 2026 03:12:40 +0000 (0:00:00.571) 0:00:55.110 ***** 2026-02-05 03:12:46.957178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:46.957196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:46.957220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:46.957244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:47.815372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:47.815508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:47.815525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:47.815566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:47.815578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:47.815590 | orchestrator | 2026-02-05 03:12:47.815604 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-05 03:12:47.815617 | orchestrator | Thursday 05 February 2026 03:12:46 +0000 (0:00:06.182) 0:01:01.293 ***** 2026-02-05 03:12:47.815650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:47.815671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:47.815685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:47.815710 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:12:47.815723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:47.815735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:47.815747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:47.815758 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:12:47.815784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 03:12:50.180970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:12:50.181172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:12:50.181190 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:12:50.181204 | orchestrator | 2026-02-05 03:12:50.181215 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-05 03:12:50.181227 | orchestrator | Thursday 05 February 2026 03:12:47 +0000 (0:00:00.855) 0:01:02.149 ***** 2026-02-05 03:12:50.181237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:50.181249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:50.181293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 03:12:50.181305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:50.181324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:50.181335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:50.181345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:50.181356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:50.181366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:12:50.181376 | orchestrator | 2026-02-05 03:12:50.181391 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-05 03:12:50.181414 | orchestrator | Thursday 05 February 2026 03:12:50 +0000 (0:00:02.370) 0:01:04.519 ***** 2026-02-05 03:13:33.868710 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:13:33.868815 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
03:13:33.868830 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:13:33.868845 | orchestrator | 2026-02-05 03:13:33.868896 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-05 03:13:33.868914 | orchestrator | Thursday 05 February 2026 03:12:50 +0000 (0:00:00.291) 0:01:04.811 ***** 2026-02-05 03:13:33.868929 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:13:33.868942 | orchestrator | 2026-02-05 03:13:33.868955 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-05 03:13:33.868969 | orchestrator | Thursday 05 February 2026 03:12:52 +0000 (0:00:02.123) 0:01:06.934 ***** 2026-02-05 03:13:33.869095 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:13:33.869111 | orchestrator | 2026-02-05 03:13:33.869125 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-05 03:13:33.869141 | orchestrator | Thursday 05 February 2026 03:12:55 +0000 (0:00:02.472) 0:01:09.407 ***** 2026-02-05 03:13:33.869157 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:13:33.869172 | orchestrator | 2026-02-05 03:13:33.869182 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 03:13:33.869191 | orchestrator | Thursday 05 February 2026 03:13:07 +0000 (0:00:12.419) 0:01:21.826 ***** 2026-02-05 03:13:33.869200 | orchestrator | 2026-02-05 03:13:33.869209 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 03:13:33.869218 | orchestrator | Thursday 05 February 2026 03:13:07 +0000 (0:00:00.061) 0:01:21.888 ***** 2026-02-05 03:13:33.869227 | orchestrator | 2026-02-05 03:13:33.869236 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 03:13:33.869245 | orchestrator | Thursday 05 February 2026 03:13:07 +0000 (0:00:00.187) 0:01:22.075 ***** 2026-02-05 
03:13:33.869253 | orchestrator | 2026-02-05 03:13:33.869262 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-05 03:13:33.869271 | orchestrator | Thursday 05 February 2026 03:13:07 +0000 (0:00:00.065) 0:01:22.141 ***** 2026-02-05 03:13:33.869280 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:13:33.869288 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:13:33.869297 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:13:33.869306 | orchestrator | 2026-02-05 03:13:33.869315 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-05 03:13:33.869324 | orchestrator | Thursday 05 February 2026 03:13:18 +0000 (0:00:10.910) 0:01:33.052 ***** 2026-02-05 03:13:33.869333 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:13:33.869342 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:13:33.869350 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:13:33.869359 | orchestrator | 2026-02-05 03:13:33.869368 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-05 03:13:33.869376 | orchestrator | Thursday 05 February 2026 03:13:28 +0000 (0:00:09.651) 0:01:42.703 ***** 2026-02-05 03:13:33.869385 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:13:33.869394 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:13:33.869402 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:13:33.869411 | orchestrator | 2026-02-05 03:13:33.869420 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:13:33.869430 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 03:13:33.869440 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 03:13:33.869449 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 03:13:33.869458 | orchestrator | 2026-02-05 03:13:33.869494 | orchestrator | 2026-02-05 03:13:33.869503 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:13:33.869530 | orchestrator | Thursday 05 February 2026 03:13:33 +0000 (0:00:05.189) 0:01:47.893 ***** 2026-02-05 03:13:33.869539 | orchestrator | =============================================================================== 2026-02-05 03:13:33.869547 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.24s 2026-02-05 03:13:33.869556 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.42s 2026-02-05 03:13:33.869565 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.91s 2026-02-05 03:13:33.869573 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.65s 2026-02-05 03:13:33.869582 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.69s 2026-02-05 03:13:33.869590 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.18s 2026-02-05 03:13:33.869599 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.19s 2026-02-05 03:13:33.869607 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.14s 2026-02-05 03:13:33.869616 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.82s 2026-02-05 03:13:33.869624 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.68s 2026-02-05 03:13:33.869633 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.58s 2026-02-05 03:13:33.869642 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.52s 
2026-02-05 03:13:33.869651 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.46s
2026-02-05 03:13:33.869659 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.47s
2026-02-05 03:13:33.869682 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.37s
2026-02-05 03:13:33.869711 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.12s
2026-02-05 03:13:33.869720 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.68s
2026-02-05 03:13:33.869729 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.52s
2026-02-05 03:13:33.869737 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.01s
2026-02-05 03:13:33.869746 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 0.88s
2026-02-05 03:13:36.163639 | orchestrator | 2026-02-05 03:13:36 | INFO  | Task 4e5146f7-ae54-4b24-bb4b-dd5dc82ad9b2 (designate) was prepared for execution.
2026-02-05 03:13:36.163743 | orchestrator | 2026-02-05 03:13:36 | INFO  | It takes a moment until task 4e5146f7-ae54-4b24-bb4b-dd5dc82ad9b2 (designate) has been started and output is visible here.
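Aside: the barbican container definitions dumped in this log each carry a kolla-style healthcheck such as `healthcheck_curl http://192.168.16.10:9311` for the API and `healthcheck_port barbican-worker 5672` for the worker and keystone-listener. As an illustrative sketch only (kolla's actual `healthcheck_port` script is stricter and checks that the named service process owns a connection to the port; the function name `probe_tcp_port` here is our own), the basic "is the port reachable" probe can be approximated in Python:

```python
import socket


def probe_tcp_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    NOTE: this is a simplified approximation of a port healthcheck,
    not kolla's actual healthcheck_port script, which additionally
    verifies which process holds the connection.
    """
    try:
        # create_connection resolves the host and attempts a TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timeout, or unreachable host all count as unhealthy.
        return False
```

In the containers above the probe runs periodically (`interval: 30`, `timeout: 30`) and the container is marked unhealthy after `retries: 3` consecutive failures.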
2026-02-05 03:14:08.697443 | orchestrator |
2026-02-05 03:14:08.697578 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:14:08.697596 | orchestrator |
2026-02-05 03:14:08.697609 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:14:08.697621 | orchestrator | Thursday 05 February 2026 03:13:40 +0000 (0:00:00.268) 0:00:00.268 *****
2026-02-05 03:14:08.698681 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:14:08.698774 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:14:08.698793 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:14:08.698805 | orchestrator |
2026-02-05 03:14:08.698816 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:14:08.698828 | orchestrator | Thursday 05 February 2026 03:13:40 +0000 (0:00:00.336) 0:00:00.605 *****
2026-02-05 03:14:08.698838 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-05 03:14:08.698849 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-05 03:14:08.698859 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-05 03:14:08.698869 | orchestrator |
2026-02-05 03:14:08.698879 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-05 03:14:08.698917 | orchestrator |
2026-02-05 03:14:08.698928 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 03:14:08.698937 | orchestrator | Thursday 05 February 2026 03:13:41 +0000 (0:00:00.447) 0:00:01.052 *****
2026-02-05 03:14:08.698948 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:14:08.698958 | orchestrator |
2026-02-05 03:14:08.698968 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-05 03:14:08.699005 | orchestrator | Thursday 05 February 2026 03:13:41 +0000 (0:00:00.539) 0:00:01.592 *****
2026-02-05 03:14:08.699016 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-05 03:14:08.699026 | orchestrator |
2026-02-05 03:14:08.699036 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-05 03:14:08.699046 | orchestrator | Thursday 05 February 2026 03:13:45 +0000 (0:00:03.657) 0:00:05.249 *****
2026-02-05 03:14:08.699056 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-05 03:14:08.699066 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-05 03:14:08.699076 | orchestrator |
2026-02-05 03:14:08.699086 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-05 03:14:08.699109 | orchestrator | Thursday 05 February 2026 03:13:52 +0000 (0:00:06.752) 0:00:12.001 *****
2026-02-05 03:14:08.699136 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 03:14:08.699158 | orchestrator |
2026-02-05 03:14:08.699181 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-05 03:14:08.699197 | orchestrator | Thursday 05 February 2026 03:13:55 +0000 (0:00:03.379) 0:00:15.380 *****
2026-02-05 03:14:08.699215 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 03:14:08.699232 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-05 03:14:08.699249 | orchestrator |
2026-02-05 03:14:08.699265 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-05 03:14:08.699281 | orchestrator | Thursday 05 February 2026 03:13:59 +0000 (0:00:04.071) 0:00:19.452 *****
2026-02-05 03:14:08.699292 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 03:14:08.699302 | orchestrator |
2026-02-05 03:14:08.699312 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-02-05 03:14:08.699321 | orchestrator | Thursday 05 February 2026 03:14:02 +0000 (0:00:03.331) 0:00:22.784 *****
2026-02-05 03:14:08.699331 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-02-05 03:14:08.699340 | orchestrator |
2026-02-05 03:14:08.699350 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-02-05 03:14:08.699360 | orchestrator | Thursday 05 February 2026 03:14:06 +0000 (0:00:03.839) 0:00:26.624 *****
2026-02-05 03:14:08.699391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:08.699435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:08.699457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:08.699468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:08.699480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:08.699496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:08.699507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:08.699532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.990791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.990895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.990908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.990915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.990935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.990968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.991020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.991031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.991042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.991050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:14.991060 | orchestrator |
2026-02-05 03:14:14.991071 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-02-05 03:14:14.991083 | orchestrator | Thursday 05 February 2026 03:14:09 +0000 (0:00:02.829) 0:00:29.453 *****
2026-02-05 03:14:14.991093 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:14:14.991105 | orchestrator |
2026-02-05 03:14:14.991115 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-02-05 03:14:14.991125 | orchestrator | Thursday 05 February 2026 03:14:09 +0000 (0:00:00.142) 0:00:29.596 *****
2026-02-05 03:14:14.991135 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:14:14.991145 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:14:14.991154 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:14:14.991163 | orchestrator |
2026-02-05 03:14:14.991173 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 03:14:14.991192 | orchestrator | Thursday 05 February 2026 03:14:10 +0000 (0:00:00.501) 0:00:30.097 *****
2026-02-05 03:14:14.991202 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:14:14.991212 | orchestrator |
2026-02-05 03:14:14.991222 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-02-05 03:14:14.991237 | orchestrator | Thursday 05 February 2026 03:14:10 +0000 (0:00:00.584) 0:00:30.681 *****
2026-02-05 03:14:14.991249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:14.991270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:16.829643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:16.829752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:16.829768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:16.829818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:16.829830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.829859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.829870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.829880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.829892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.829924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.829950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.830090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:16.830131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702721 | orchestrator |
2026-02-05 03:14:17.702744 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-02-05 03:14:17.702764 | orchestrator | Thursday 05 February 2026 03:14:16 +0000 (0:00:06.069) 0:00:36.751 *****
2026-02-05 03:14:17.702804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:17.702826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:17.702871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.702964 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:14:17.703018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:17.703044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:17.703065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:17.703094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:18.784624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 03:14:18.784763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:14:18.784780 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:14:18.784811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 03:14:18.784824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 03:14:18.784838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 03:14:18.784849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 03:14:18.784886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05
03:14:18.784899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:18.784911 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:14:18.784922 | orchestrator | 2026-02-05 03:14:18.784935 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-05 03:14:18.784948 | orchestrator | Thursday 05 February 2026 03:14:17 +0000 (0:00:01.167) 0:00:37.918 ***** 2026-02-05 03:14:18.784964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:18.785042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 03:14:18.785055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:18.785075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135302 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:14:19.135329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:19.135337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 03:14:19.135345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135413 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:14:19.135424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:19.135431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 03:14:19.135438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:19.135462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:23.182678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:23.182789 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:14:23.182808 | orchestrator | 2026-02-05 03:14:23.182821 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-05 
03:14:23.182833 | orchestrator | Thursday 05 February 2026 03:14:19 +0000 (0:00:01.139) 0:00:39.057 ***** 2026-02-05 03:14:23.182861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:23.182875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:23.182887 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:23.182942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:23.182958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:23.183043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:23.183057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:23.183069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:23.183089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:23.183101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:23.183122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.613828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614224 | orchestrator | 2026-02-05 03:14:34.614236 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-05 03:14:34.614247 | orchestrator | Thursday 05 February 2026 03:14:25 +0000 (0:00:05.954) 0:00:45.012 ***** 2026-02-05 03:14:34.614265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:34.614282 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:34.614310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:34.614328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:34.614357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:42.763730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:42.763861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.763884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.763920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.763933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.763946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:42.764119 | orchestrator | 2026-02-05 03:14:42.764132 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-05 03:14:42.764144 | orchestrator | Thursday 05 February 2026 03:14:39 +0000 (0:00:14.077) 0:00:59.089 ***** 2026-02-05 03:14:42.764165 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 03:14:46.726065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 03:14:46.726164 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 03:14:46.726175 | orchestrator | 2026-02-05 03:14:46.726183 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-05 03:14:46.726191 | orchestrator | Thursday 05 February 2026 03:14:42 +0000 (0:00:03.594) 0:01:02.684 ***** 2026-02-05 03:14:46.726197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 03:14:46.726217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 03:14:46.726227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 03:14:46.726261 | orchestrator | 2026-02-05 03:14:46.726274 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-05 03:14:46.726285 | orchestrator | Thursday 05 February 2026 03:14:45 +0000 (0:00:02.298) 0:01:04.982 ***** 2026-02-05 03:14:46.726299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:46.726314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:46.726322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-05 03:14:46.726343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:46.726356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:46.726369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-05 03:14:46.726377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:46.726384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:46.726390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-05 03:14:46.726397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:46.726410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:49.394452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-05 03:14:49.394567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:49.394584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:49.394598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:49.394610 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:49.394623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:49.394663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:49.394766 | orchestrator | 2026-02-05 03:14:49.394784 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-05 03:14:49.394798 | orchestrator | Thursday 05 February 2026 03:14:47 +0000 (0:00:02.735) 0:01:07.717 ***** 2026-02-05 03:14:49.394810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:49.394824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 
03:14:49.394837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:49.394849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:49.394881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:50.345665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:50.345775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:50.345811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:50.345823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:50.345842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:50.345854 | orchestrator | 2026-02-05 03:14:50.345888 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-05 03:14:50.345914 | orchestrator | Thursday 05 February 2026 03:14:50 +0000 (0:00:02.545) 0:01:10.263 ***** 2026-02-05 03:14:51.194558 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:14:51.194638 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:14:51.194650 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:14:51.194659 | orchestrator | 2026-02-05 03:14:51.194668 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-05 03:14:51.194678 | orchestrator | Thursday 05 February 2026 03:14:50 +0000 (0:00:00.264) 0:01:10.527 ***** 2026-02-05 03:14:51.194689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:51.194702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 03:14:51.194712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194809 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:14:51.194815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:51.194820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 03:14:51.194825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:51.194851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:54.366451 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:14:54.366587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 03:14:54.366720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 03:14:54.366745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 03:14:54.366804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 03:14:54.366825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 03:14:54.366889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:14:54.366914 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:14:54.366933 | orchestrator | 2026-02-05 03:14:54.367008 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-05 03:14:54.367032 | orchestrator | Thursday 05 February 2026 03:14:51 +0000 (0:00:00.684) 0:01:11.212 ***** 2026-02-05 03:14:54.367052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:54.367075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:54.367096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 03:14:54.367131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:54.367173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:14:56.212274 | orchestrator | 2026-02-05 03:14:56.212281 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-05 03:14:56.212292 | orchestrator | Thursday 05 February 2026 03:14:55 +0000 (0:00:04.384) 0:01:15.597 ***** 2026-02-05 03:14:56.212298 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:14:56.212309 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:16:28.010340 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:16:28.010448 | orchestrator | 2026-02-05 03:16:28.010464 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-02-05 03:16:28.010476 | orchestrator | Thursday 05 February 2026 03:14:56 +0000 (0:00:00.538) 0:01:16.136 ***** 2026-02-05 03:16:28.010486 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-05 03:16:28.010500 | orchestrator | 2026-02-05 03:16:28.010517 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-05 03:16:28.010532 | orchestrator | Thursday 05 February 2026 03:14:58 +0000 (0:00:02.370) 0:01:18.506 ***** 2026-02-05 03:16:28.010550 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 03:16:28.010566 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-05 03:16:28.010584 | orchestrator | 2026-02-05 03:16:28.010602 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-05 03:16:28.010613 | orchestrator | Thursday 05 February 2026 03:15:00 +0000 (0:00:02.425) 0:01:20.932 ***** 2026-02-05 03:16:28.010623 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:16:28.010633 | orchestrator | 2026-02-05 03:16:28.010643 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-05 03:16:28.010677 | orchestrator | Thursday 05 February 2026 03:15:17 +0000 (0:00:16.454) 0:01:37.386 ***** 2026-02-05 03:16:28.010688 | orchestrator | 2026-02-05 03:16:28.010698 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-05 03:16:28.010708 | orchestrator | Thursday 05 February 2026 03:15:17 +0000 (0:00:00.091) 0:01:37.477 ***** 2026-02-05 03:16:28.010717 | orchestrator | 2026-02-05 03:16:28.010727 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-05 03:16:28.010736 | orchestrator | Thursday 05 February 2026 03:15:17 +0000 (0:00:00.085) 0:01:37.562 ***** 2026-02-05 03:16:28.010747 | orchestrator | 2026-02-05 
03:16:28.010757 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-05 03:16:28.010782 | orchestrator | Thursday 05 February 2026 03:15:17 +0000 (0:00:00.073) 0:01:37.635 *****
2026-02-05 03:16:28.010793 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:16:28.010803 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:16:28.010823 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.010832 | orchestrator |
2026-02-05 03:16:28.010842 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-05 03:16:28.010852 | orchestrator | Thursday 05 February 2026 03:15:26 +0000 (0:00:08.768) 0:01:46.404 *****
2026-02-05 03:16:28.010862 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.010872 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:16:28.010882 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:16:28.010895 | orchestrator |
2026-02-05 03:16:28.010906 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-05 03:16:28.010918 | orchestrator | Thursday 05 February 2026 03:15:37 +0000 (0:00:10.654) 0:01:57.059 *****
2026-02-05 03:16:28.010929 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.010941 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:16:28.010954 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:16:28.010991 | orchestrator |
2026-02-05 03:16:28.011007 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-05 03:16:28.011019 | orchestrator | Thursday 05 February 2026 03:15:47 +0000 (0:00:10.598) 0:02:07.658 *****
2026-02-05 03:16:28.011031 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.011043 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:16:28.011054 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:16:28.011065 | orchestrator |
2026-02-05 03:16:28.011077 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-02-05 03:16:28.011089 | orchestrator | Thursday 05 February 2026 03:15:58 +0000 (0:00:10.674) 0:02:18.332 *****
2026-02-05 03:16:28.011101 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.011112 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:16:28.011124 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:16:28.011135 | orchestrator |
2026-02-05 03:16:28.011146 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-02-05 03:16:28.011158 | orchestrator | Thursday 05 February 2026 03:16:09 +0000 (0:00:10.643) 0:02:28.976 *****
2026-02-05 03:16:28.011169 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.011181 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:16:28.011193 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:16:28.011204 | orchestrator |
2026-02-05 03:16:28.011216 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-02-05 03:16:28.011227 | orchestrator | Thursday 05 February 2026 03:16:20 +0000 (0:00:11.049) 0:02:40.026 *****
2026-02-05 03:16:28.011239 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:16:28.011249 | orchestrator |
2026-02-05 03:16:28.011259 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:16:28.011270 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 03:16:28.011281 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 03:16:28.011299 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 03:16:28.011308 | orchestrator |
2026-02-05 03:16:28.011318 | orchestrator |
2026-02-05 03:16:28.011328 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:16:28.011338 | orchestrator | Thursday 05 February 2026 03:16:27 +0000 (0:00:07.562) 0:02:47.588 *****
2026-02-05 03:16:28.011347 | orchestrator | ===============================================================================
2026-02-05 03:16:28.011371 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.45s
2026-02-05 03:16:28.011382 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.08s
2026-02-05 03:16:28.011408 | orchestrator | designate : Restart designate-worker container ------------------------- 11.05s
2026-02-05 03:16:28.011419 | orchestrator | designate : Restart designate-producer container ----------------------- 10.67s
2026-02-05 03:16:28.011429 | orchestrator | designate : Restart designate-api container ---------------------------- 10.65s
2026-02-05 03:16:28.011438 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.64s
2026-02-05 03:16:28.011448 | orchestrator | designate : Restart designate-central container ------------------------ 10.60s
2026-02-05 03:16:28.011458 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.77s
2026-02-05 03:16:28.011467 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.56s
2026-02-05 03:16:28.011477 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.75s
2026-02-05 03:16:28.011487 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.07s
2026-02-05 03:16:28.011497 | orchestrator | designate : Copying over config.json files for services ----------------- 5.95s
2026-02-05 03:16:28.011506 | orchestrator | designate : Check designate containers ---------------------------------- 4.38s
2026-02-05 03:16:28.011516 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.07s
2026-02-05 03:16:28.011526 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.84s
2026-02-05 03:16:28.011536 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.66s
2026-02-05 03:16:28.011545 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.59s
2026-02-05 03:16:28.011555 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.38s
2026-02-05 03:16:28.011565 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.33s
2026-02-05 03:16:28.011575 | orchestrator | designate : Ensuring config directories exist --------------------------- 2.83s
2026-02-05 03:16:30.368749 | orchestrator | 2026-02-05 03:16:30 | INFO  | Task 530ebb7a-4f79-45e4-8974-96cecb99abeb (octavia) was prepared for execution.
2026-02-05 03:16:30.368882 | orchestrator | 2026-02-05 03:16:30 | INFO  | It takes a moment until task 530ebb7a-4f79-45e4-8974-96cecb99abeb (octavia) has been started and output is visible here.
2026-02-05 03:18:38.872120 | orchestrator |
2026-02-05 03:18:38.872250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:18:38.872272 | orchestrator |
2026-02-05 03:18:38.872286 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:18:38.872296 | orchestrator | Thursday 05 February 2026 03:16:34 +0000 (0:00:00.253) 0:00:00.253 *****
2026-02-05 03:18:38.872303 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:18:38.872312 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:18:38.872319 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:18:38.872327 | orchestrator |
2026-02-05 03:18:38.872335 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:18:38.872342 | orchestrator | Thursday 05 February 2026 03:16:34 +0000 (0:00:00.301) 0:00:00.555 *****
2026-02-05 03:18:38.872350 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-05 03:18:38.872445 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-05 03:18:38.872458 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-05 03:18:38.872471 | orchestrator |
2026-02-05 03:18:38.872484 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-05 03:18:38.872497 | orchestrator |
2026-02-05 03:18:38.872509 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 03:18:38.872521 | orchestrator | Thursday 05 February 2026 03:16:35 +0000 (0:00:00.433) 0:00:00.988 *****
2026-02-05 03:18:38.872534 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:18:38.872548 | orchestrator |
2026-02-05 03:18:38.872560 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-05 03:18:38.872572 | orchestrator | Thursday 05 February 2026 03:16:35 +0000 (0:00:00.544) 0:00:01.533 *****
2026-02-05 03:18:38.872587 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-05 03:18:38.872600 | orchestrator |
2026-02-05 03:18:38.872613 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-05 03:18:38.872626 | orchestrator | Thursday 05 February 2026 03:16:39 +0000 (0:00:03.577) 0:00:05.110 *****
2026-02-05 03:18:38.872639 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-05 03:18:38.872654 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-05 03:18:38.872669 | orchestrator |
2026-02-05 03:18:38.872684 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-05 03:18:38.872697 | orchestrator | Thursday 05 February 2026 03:16:46 +0000 (0:00:06.771) 0:00:11.882 *****
2026-02-05 03:18:38.872710 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 03:18:38.872723 | orchestrator |
2026-02-05 03:18:38.872737 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-05 03:18:38.872751 | orchestrator | Thursday 05 February 2026 03:16:49 +0000 (0:00:03.205) 0:00:15.088 *****
2026-02-05 03:18:38.872766 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 03:18:38.872780 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-05 03:18:38.872791 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-05 03:18:38.872800 | orchestrator |
2026-02-05 03:18:38.872823 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-05 03:18:38.872832 | orchestrator | Thursday 05 February 2026 03:16:57 +0000 (0:00:08.510) 0:00:23.598 *****
2026-02-05 03:18:38.872841 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 03:18:38.872850 | orchestrator |
2026-02-05 03:18:38.872858 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-05 03:18:38.872866 | orchestrator | Thursday 05 February 2026 03:17:01 +0000 (0:00:03.343) 0:00:26.942 *****
2026-02-05 03:18:38.872875 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-05 03:18:38.872883 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-05 03:18:38.872892 | orchestrator |
2026-02-05 03:18:38.872901 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-05 03:18:38.872909 | orchestrator | Thursday 05 February 2026 03:17:08 +0000 (0:00:07.352) 0:00:34.294 *****
2026-02-05 03:18:38.872917 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-05 03:18:38.872925 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-05 03:18:38.872933 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-05 03:18:38.872976 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-05 03:18:38.872985 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-05 03:18:38.872993 | orchestrator |
2026-02-05 03:18:38.873001 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 03:18:38.873020 | orchestrator | Thursday 05 February 2026 03:17:24 +0000 (0:00:15.719) 0:00:50.013 *****
2026-02-05 03:18:38.873029 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:18:38.873036 | orchestrator |
2026-02-05 03:18:38.873044 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-05 03:18:38.873051 | orchestrator | Thursday 05 February 2026 03:17:24 +0000 (0:00:00.764) 0:00:50.778 *****
2026-02-05 03:18:38.873058 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873066 | orchestrator |
2026-02-05 03:18:38.873073 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-05 03:18:38.873080 | orchestrator | Thursday 05 February 2026 03:17:29 +0000 (0:00:04.997) 0:00:55.775 *****
2026-02-05 03:18:38.873087 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873095 | orchestrator |
2026-02-05 03:18:38.873102 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-05 03:18:38.873128 | orchestrator | Thursday 05 February 2026 03:17:34 +0000 (0:00:04.023) 0:00:59.799 *****
2026-02-05 03:18:38.873136 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:18:38.873143 | orchestrator |
2026-02-05 03:18:38.873151 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-05 03:18:38.873158 | orchestrator | Thursday 05 February 2026 03:17:37 +0000 (0:00:03.329) 0:01:03.128 *****
2026-02-05 03:18:38.873165 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-05 03:18:38.873173 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-05 03:18:38.873180 | orchestrator |
2026-02-05 03:18:38.873187 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-05 03:18:38.873194 | orchestrator | Thursday 05 February 2026 03:17:48 +0000 (0:00:10.794) 0:01:13.923 *****
2026-02-05 03:18:38.873202 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-05 03:18:38.873209 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-05 03:18:38.873219 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-05 03:18:38.873231 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-05 03:18:38.873251 | orchestrator |
2026-02-05 03:18:38.873268 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-05 03:18:38.873281 | orchestrator | Thursday 05 February 2026 03:18:04 +0000 (0:00:16.049) 0:01:29.973 *****
2026-02-05 03:18:38.873293 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873305 | orchestrator |
2026-02-05 03:18:38.873318 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-05 03:18:38.873331 | orchestrator | Thursday 05 February 2026 03:18:09 +0000 (0:00:04.884) 0:01:34.857 *****
2026-02-05 03:18:38.873344 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873358 | orchestrator |
2026-02-05 03:18:38.873370 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-05 03:18:38.873382 | orchestrator | Thursday 05 February 2026 03:18:14 +0000 (0:00:05.520) 0:01:40.377 *****
2026-02-05 03:18:38.873390 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:18:38.873398 | orchestrator |
2026-02-05 03:18:38.873405 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-05 03:18:38.873412 | orchestrator | Thursday 05 February 2026 03:18:14 +0000 (0:00:00.274) 0:01:40.652 *****
2026-02-05 03:18:38.873420 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:18:38.873427 | orchestrator |
2026-02-05 03:18:38.873434 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 03:18:38.873441 | orchestrator | Thursday 05 February 2026 03:18:19 +0000 (0:00:04.716) 0:01:45.369 *****
2026-02-05 03:18:38.873456 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:18:38.873464 | orchestrator |
2026-02-05 03:18:38.873471 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-05 03:18:38.873484 | orchestrator | Thursday 05 February 2026 03:18:20 +0000 (0:00:01.095) 0:01:46.464 *****
2026-02-05 03:18:38.873492 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873499 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:18:38.873507 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:18:38.873514 | orchestrator |
2026-02-05 03:18:38.873521 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-05 03:18:38.873528 | orchestrator | Thursday 05 February 2026 03:18:26 +0000 (0:00:05.824) 0:01:52.288 *****
2026-02-05 03:18:38.873535 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:18:38.873542 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:18:38.873550 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873557 | orchestrator |
2026-02-05 03:18:38.873564 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-05 03:18:38.873571 | orchestrator | Thursday 05 February 2026 03:18:31 +0000 (0:00:01.003) 0:01:57.208 *****
2026-02-05 03:18:38.873578 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873585 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:18:38.873593 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:18:38.873600 | orchestrator |
2026-02-05 03:18:38.873607 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-05 03:18:38.873614 | orchestrator | Thursday 05 February 2026 03:18:32 +0000 (0:00:01.003) 0:01:58.211 *****
2026-02-05 03:18:38.873621 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:18:38.873629 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:18:38.873636 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:18:38.873643 | orchestrator |
2026-02-05 03:18:38.873650 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-05 03:18:38.873657 | orchestrator | Thursday 05 February 2026 03:18:34 +0000 (0:00:01.812) 0:02:00.023 *****
2026-02-05 03:18:38.873665 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873672 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:18:38.873679 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:18:38.873686 | orchestrator |
2026-02-05 03:18:38.873693 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-05 03:18:38.873701 | orchestrator | Thursday 05 February 2026 03:18:35 +0000 (0:00:01.277) 0:02:01.301 *****
2026-02-05 03:18:38.873708 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873715 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:18:38.873722 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:18:38.873729 | orchestrator |
2026-02-05 03:18:38.873736 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-05 03:18:38.873744 | orchestrator | Thursday 05 February 2026 03:18:36 +0000 (0:00:01.174) 0:02:02.475 *****
2026-02-05 03:18:38.873751 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:18:38.873758 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:18:38.873765 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:18:38.873773 | orchestrator |
2026-02-05 03:18:38.873787 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-05 03:19:05.003152 | orchestrator | Thursday 05 February 2026 03:18:38 +0000 (0:00:02.156) 0:02:04.632 *****
2026-02-05 03:19:05.003279 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:19:05.003293 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:19:05.003302 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:19:05.003310 | orchestrator |
2026-02-05 03:19:05.003318 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-05 03:19:05.003326 | orchestrator | Thursday 05 February 2026 03:18:40 +0000 (0:00:01.472) 0:02:06.104 *****
2026-02-05 03:19:05.003333 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:19:05.003342 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:19:05.003368 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:19:05.003376 | orchestrator |
2026-02-05 03:19:05.003384 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-05 03:19:05.003391 | orchestrator | Thursday 05 February 2026 03:18:40 +0000 (0:00:00.655) 0:02:06.760 *****
2026-02-05 03:19:05.003398 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:19:05.003406 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:19:05.003413 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:19:05.003420 | orchestrator |
2026-02-05 03:19:05.003428 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 03:19:05.003435 | orchestrator | Thursday 05 February 2026 03:18:43 +0000 (0:00:02.887) 0:02:09.647 *****
2026-02-05 03:19:05.003443 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:19:05.003450 | orchestrator |
2026-02-05 03:19:05.003458 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-05 03:19:05.003465 | orchestrator | Thursday 05 February 2026 03:18:44 +0000 (0:00:00.708) 0:02:10.355 *****
2026-02-05 03:19:05.003472 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:19:05.003479 | orchestrator |
2026-02-05 03:19:05.003486 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-05 03:19:05.003494 | orchestrator | Thursday 05 February 2026 03:18:48 +0000 (0:00:03.986) 0:02:14.342 *****
2026-02-05 03:19:05.003501 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:19:05.003508 | orchestrator |
2026-02-05 03:19:05.003516 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-05 03:19:05.003523 | orchestrator | Thursday 05 February 2026 03:18:51 +0000 (0:00:03.319) 0:02:17.661 *****
2026-02-05 03:19:05.003530 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-05 03:19:05.003538 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-05 03:19:05.003546 | orchestrator |
2026-02-05 03:19:05.003553 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-05 03:19:05.003560 | orchestrator | Thursday 05 February 2026 03:18:59 +0000 (0:00:07.172) 0:02:24.834 *****
2026-02-05 03:19:05.003567 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:19:05.003575 | orchestrator |
2026-02-05 03:19:05.003582 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-02-05 03:19:05.003589 | orchestrator | Thursday 05 February 2026 03:19:02 +0000 (0:00:03.492) 0:02:28.326 *****
2026-02-05 03:19:05.003597 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:19:05.003604 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:19:05.003611 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:19:05.003618 | orchestrator |
2026-02-05 03:19:05.003637 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-02-05 03:19:05.003644 | orchestrator | Thursday 05 February 2026 03:19:02 +0000 (0:00:00.322) 0:02:28.648 *****
2026-02-05 03:19:05.003654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:05.003680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:05.003694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:05.003703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:05.003713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:05.003726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:05.003736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:05.003752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:05.003766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:06.486534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:06.486634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:06.486664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:06.486677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:06.486688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:06.486721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:06.486733 | orchestrator |
2026-02-05 03:19:06.486745 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-02-05 03:19:06.486756 | orchestrator | Thursday 05 February 2026 03:19:05 +0000 (0:00:02.590) 0:02:31.239 *****
2026-02-05 03:19:06.486766 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:19:06.486777 | orchestrator |
2026-02-05 03:19:06.486787 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-02-05 03:19:06.486797 | orchestrator | Thursday 05 February 2026 03:19:05 +0000 (0:00:00.167) 0:02:31.407 *****
2026-02-05 03:19:06.486806 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:19:06.486831 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:19:06.486842 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:19:06.486852 | orchestrator |
2026-02-05 03:19:06.486862 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-02-05 03:19:06.486871 | orchestrator | Thursday 05 February 2026 03:19:05 +0000 (0:00:00.284) 0:02:31.692 *****
2026-02-05 03:19:06.486883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:06.486895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:06.486911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:06.486930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value':
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 03:19:06.486987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:19:06.486997 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:19:06.487017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 03:19:11.304870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 03:19:11.304975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 03:19:11.304994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 03:19:11.305016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:19:11.305021 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:19:11.305026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 03:19:11.305031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 03:19:11.305045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 03:19:11.305050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 03:19:11.305056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:19:11.305064 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:19:11.305068 | orchestrator | 2026-02-05 03:19:11.305073 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 03:19:11.305078 | orchestrator | Thursday 05 February 2026 03:19:06 +0000 (0:00:00.667) 0:02:32.359 ***** 2026-02-05 03:19:11.305083 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:19:11.305087 | orchestrator | 2026-02-05 03:19:11.305090 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-05 03:19:11.305094 | orchestrator | Thursday 05 February 2026 03:19:07 +0000 (0:00:00.732) 0:02:33.092 ***** 2026-02-05 03:19:11.305099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 03:19:11.305104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 03:19:11.305111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 03:19:12.802468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 03:19:12.802595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 03:19:12.802611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 03:19:12.802624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:19:12.802763 | orchestrator | 2026-02-05 03:19:12.802777 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-05 03:19:12.802790 | orchestrator | Thursday 05 February 2026 03:19:12 +0000 (0:00:04.894) 0:02:37.986 ***** 2026-02-05 03:19:12.802811 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 03:19:12.899533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 03:19:12.899622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 03:19:12.899636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 03:19:12.899649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 03:19:12.899661 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:19:12.899675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 03:19:12.899688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 03:19:12.899745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 03:19:12.899760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:12.899771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:12.899782 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:19:12.899794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:12.899806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:12.899818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:12.899845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:13.495706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:13.495810 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:19:13.495833 | orchestrator |
2026-02-05 03:19:13.495848 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-02-05 03:19:13.495859 | orchestrator | Thursday 05 February 2026 03:19:12 +0000 (0:00:00.682) 0:02:38.668 *****
2026-02-05 03:19:13.495870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:13.495881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:13.495891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:13.495923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:13.496021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:13.496038 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:19:13.496047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:13.496056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:13.496065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:13.496074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:13.496093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:13.496102 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:19:13.496122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:17.946052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:17.946195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:17.946221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:17.946238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:17.946281 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:19:17.946298 | orchestrator |
2026-02-05 03:19:17.946316 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-02-05 03:19:17.946326 | orchestrator | Thursday 05 February 2026 03:19:13 +0000 (0:00:01.061) 0:02:39.730 *****
2026-02-05 03:19:17.946335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:17.946374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:17.946384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:17.946393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:17.946411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:17.946425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:17.946440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:17.946485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:33.702918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:33.703125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:33.703151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:33.703202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:33.703221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:33.703257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:33.703332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:33.703346 | orchestrator |
2026-02-05 03:19:33.703359 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-02-05 03:19:33.703371 | orchestrator | Thursday 05 February 2026 03:19:18 +0000 (0:00:04.933) 0:02:44.664 *****
2026-02-05 03:19:33.703381 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-05 03:19:33.703392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-05 03:19:33.703402 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-05 03:19:33.703412 | orchestrator |
2026-02-05 03:19:33.703422 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-02-05 03:19:33.703432 | orchestrator | Thursday 05 February 2026 03:19:20 +0000 (0:00:01.837) 0:02:46.501 *****
2026-02-05 03:19:33.703443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:33.703465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:33.703478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-05 03:19:33.703504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:48.741501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:48.741693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-05 03:19:48.741741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:48.741757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:48.741769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-05 03:19:48.741782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:48.741836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:48.741860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-05 03:19:48.741892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:48.741913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:48.741957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-05 03:19:48.741978 | orchestrator |
2026-02-05 03:19:48.741999 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-02-05 03:19:48.742109 | orchestrator | Thursday 05 February 2026 03:19:36 +0000 (0:00:16.104) 0:03:02.606 *****
2026-02-05 03:19:48.742134 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:19:48.742256 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:19:48.742346 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:19:48.742366 | orchestrator |
2026-02-05 03:19:48.742383 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-02-05 03:19:48.742401 | orchestrator | Thursday 05 February 2026 03:19:38 +0000 (0:00:01.687) 0:03:04.294 *****
2026-02-05 03:19:48.742418 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-05 03:19:48.742435 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-05 03:19:48.742453 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-05 03:19:48.742470 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-05 03:19:48.742487 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-05 03:19:48.742504 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-05 03:19:48.742522 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-05 03:19:48.742540 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-05 03:19:48.742559 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-05 03:19:48.742591 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-05 03:19:48.742612 | orchestrator
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 03:19:48.742631 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 03:19:48.742648 | orchestrator | 2026-02-05 03:19:48.742666 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-05 03:19:48.742699 | orchestrator | Thursday 05 February 2026 03:19:43 +0000 (0:00:05.216) 0:03:09.511 ***** 2026-02-05 03:19:48.742719 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 03:19:48.742734 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 03:19:48.742770 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 03:19:56.837094 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 03:19:56.837201 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 03:19:56.837214 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 03:19:56.837225 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 03:19:56.837235 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 03:19:56.837245 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 03:19:56.837255 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 03:19:56.837265 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 03:19:56.837275 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 03:19:56.837286 | orchestrator | 2026-02-05 03:19:56.837298 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-05 03:19:56.837310 | orchestrator | Thursday 05 February 2026 03:19:48 +0000 (0:00:04.992) 0:03:14.504 ***** 2026-02-05 03:19:56.837321 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-05 03:19:56.837332 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 03:19:56.837343 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 03:19:56.837354 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 03:19:56.837365 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 03:19:56.837375 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 03:19:56.837386 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 03:19:56.837397 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 03:19:56.837412 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 03:19:56.837438 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 03:19:56.837460 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 03:19:56.837651 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 03:19:56.837671 | orchestrator | 2026-02-05 03:19:56.837688 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-05 03:19:56.837707 | orchestrator | Thursday 05 February 2026 03:19:53 +0000 (0:00:05.146) 0:03:19.650 ***** 2026-02-05 03:19:56.837732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 03:19:56.837779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 03:19:56.837872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 03:19:56.837897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 03:19:56.838002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 03:19:56.838104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-05 03:19:56.838118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:56.838131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:56.838162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 03:19:56.838186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 03:21:13.815874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 03:21:13.816046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 03:21:13.816066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:13.816078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:13.816115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:13.816128 | orchestrator | 2026-02-05 
03:21:13.816156 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 03:21:13.816170 | orchestrator | Thursday 05 February 2026 03:19:57 +0000 (0:00:03.640) 0:03:23.291 ***** 2026-02-05 03:21:13.816181 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:13.816193 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:13.816204 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:13.816215 | orchestrator | 2026-02-05 03:21:13.816226 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-05 03:21:13.816237 | orchestrator | Thursday 05 February 2026 03:19:58 +0000 (0:00:00.509) 0:03:23.800 ***** 2026-02-05 03:21:13.816248 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816259 | orchestrator | 2026-02-05 03:21:13.816270 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-05 03:21:13.816281 | orchestrator | Thursday 05 February 2026 03:20:00 +0000 (0:00:02.216) 0:03:26.016 ***** 2026-02-05 03:21:13.816291 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816302 | orchestrator | 2026-02-05 03:21:13.816313 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-05 03:21:13.816324 | orchestrator | Thursday 05 February 2026 03:20:02 +0000 (0:00:02.364) 0:03:28.381 ***** 2026-02-05 03:21:13.816335 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816347 | orchestrator | 2026-02-05 03:21:13.816357 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-05 03:21:13.816369 | orchestrator | Thursday 05 February 2026 03:20:05 +0000 (0:00:02.431) 0:03:30.812 ***** 2026-02-05 03:21:13.816398 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816412 | orchestrator | 2026-02-05 03:21:13.816425 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-05 03:21:13.816438 | orchestrator | Thursday 05 February 2026 03:20:07 +0000 (0:00:02.348) 0:03:33.161 ***** 2026-02-05 03:21:13.816451 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816464 | orchestrator | 2026-02-05 03:21:13.816477 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-05 03:21:13.816489 | orchestrator | Thursday 05 February 2026 03:20:31 +0000 (0:00:23.620) 0:03:56.781 ***** 2026-02-05 03:21:13.816502 | orchestrator | 2026-02-05 03:21:13.816514 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-05 03:21:13.816527 | orchestrator | Thursday 05 February 2026 03:20:31 +0000 (0:00:00.070) 0:03:56.851 ***** 2026-02-05 03:21:13.816540 | orchestrator | 2026-02-05 03:21:13.816553 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-05 03:21:13.816565 | orchestrator | Thursday 05 February 2026 03:20:31 +0000 (0:00:00.064) 0:03:56.916 ***** 2026-02-05 03:21:13.816578 | orchestrator | 2026-02-05 03:21:13.816591 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-05 03:21:13.816605 | orchestrator | Thursday 05 February 2026 03:20:31 +0000 (0:00:00.068) 0:03:56.985 ***** 2026-02-05 03:21:13.816626 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816639 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:21:13.816652 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:21:13.816666 | orchestrator | 2026-02-05 03:21:13.816679 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-05 03:21:13.816692 | orchestrator | Thursday 05 February 2026 03:20:41 +0000 (0:00:10.614) 0:04:07.600 ***** 2026-02-05 03:21:13.816705 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816718 | orchestrator | changed: 
[testbed-node-1] 2026-02-05 03:21:13.816731 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:21:13.816743 | orchestrator | 2026-02-05 03:21:13.816754 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-05 03:21:13.816765 | orchestrator | Thursday 05 February 2026 03:20:53 +0000 (0:00:11.304) 0:04:18.904 ***** 2026-02-05 03:21:13.816775 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816786 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:21:13.816797 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:21:13.816808 | orchestrator | 2026-02-05 03:21:13.816819 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-05 03:21:13.816830 | orchestrator | Thursday 05 February 2026 03:20:58 +0000 (0:00:05.329) 0:04:24.234 ***** 2026-02-05 03:21:13.816840 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816851 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:21:13.816862 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:21:13.816873 | orchestrator | 2026-02-05 03:21:13.816884 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-05 03:21:13.816919 | orchestrator | Thursday 05 February 2026 03:21:08 +0000 (0:00:09.954) 0:04:34.188 ***** 2026-02-05 03:21:13.816939 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:21:13.816958 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:21:13.816969 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:21:13.816980 | orchestrator | 2026-02-05 03:21:13.816991 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:21:13.817003 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 03:21:13.817015 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-05 03:21:13.817028 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 03:21:13.817046 | orchestrator | 2026-02-05 03:21:13.817062 | orchestrator | 2026-02-05 03:21:13.817079 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:21:13.817096 | orchestrator | Thursday 05 February 2026 03:21:13 +0000 (0:00:05.376) 0:04:39.565 ***** 2026-02-05 03:21:13.817113 | orchestrator | =============================================================================== 2026-02-05 03:21:13.817131 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.62s 2026-02-05 03:21:13.817150 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.10s 2026-02-05 03:21:13.817176 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.05s 2026-02-05 03:21:13.817195 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.72s 2026-02-05 03:21:13.817213 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.30s 2026-02-05 03:21:13.817230 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.79s 2026-02-05 03:21:13.817250 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.61s 2026-02-05 03:21:13.817269 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.95s 2026-02-05 03:21:13.817289 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.51s 2026-02-05 03:21:13.817308 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.35s 2026-02-05 03:21:13.817328 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.17s 2026-02-05 03:21:13.817339 
| orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.77s 2026-02-05 03:21:13.817353 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.82s 2026-02-05 03:21:13.817371 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.52s 2026-02-05 03:21:13.817412 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.38s 2026-02-05 03:21:14.271565 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.33s 2026-02-05 03:21:14.271654 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.22s 2026-02-05 03:21:14.271668 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.15s 2026-02-05 03:21:14.271679 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.00s 2026-02-05 03:21:14.271690 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.99s 2026-02-05 03:21:16.602202 | orchestrator | 2026-02-05 03:21:16 | INFO  | Task 1dca56a6-c7ee-4c51-b996-d6ab7dbfc7a5 (ceilometer) was prepared for execution. 2026-02-05 03:21:16.602279 | orchestrator | 2026-02-05 03:21:16 | INFO  | It takes a moment until task 1dca56a6-c7ee-4c51-b996-d6ab7dbfc7a5 (ceilometer) has been started and output is visible here. 
2026-02-05 03:21:40.426810 | orchestrator | 2026-02-05 03:21:40.426938 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:21:40.426951 | orchestrator | 2026-02-05 03:21:40.426959 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:21:40.426966 | orchestrator | Thursday 05 February 2026 03:21:20 +0000 (0:00:00.262) 0:00:00.262 ***** 2026-02-05 03:21:40.426974 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:21:40.426986 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:21:40.426996 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:21:40.427006 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:21:40.427016 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:21:40.427026 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:21:40.427036 | orchestrator | 2026-02-05 03:21:40.427046 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:21:40.427057 | orchestrator | Thursday 05 February 2026 03:21:21 +0000 (0:00:00.739) 0:00:01.001 ***** 2026-02-05 03:21:40.427069 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-05 03:21:40.427081 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-05 03:21:40.427092 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-05 03:21:40.427104 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-05 03:21:40.427112 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-05 03:21:40.427118 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-05 03:21:40.427125 | orchestrator | 2026-02-05 03:21:40.427131 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-05 03:21:40.427138 | orchestrator | 2026-02-05 03:21:40.427144 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-05 03:21:40.427150 | orchestrator | Thursday 05 February 2026 03:21:22 +0000 (0:00:00.605) 0:00:01.607 ***** 2026-02-05 03:21:40.427157 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 03:21:40.427165 | orchestrator | 2026-02-05 03:21:40.427171 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-05 03:21:40.427177 | orchestrator | Thursday 05 February 2026 03:21:23 +0000 (0:00:01.220) 0:00:02.828 ***** 2026-02-05 03:21:40.427183 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:40.427190 | orchestrator | 2026-02-05 03:21:40.427196 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-05 03:21:40.427221 | orchestrator | Thursday 05 February 2026 03:21:23 +0000 (0:00:00.119) 0:00:02.947 ***** 2026-02-05 03:21:40.427228 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:40.427234 | orchestrator | 2026-02-05 03:21:40.427240 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-05 03:21:40.427246 | orchestrator | Thursday 05 February 2026 03:21:23 +0000 (0:00:00.132) 0:00:03.080 ***** 2026-02-05 03:21:40.427253 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 03:21:40.427259 | orchestrator | 2026-02-05 03:21:40.427265 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-05 03:21:40.427271 | orchestrator | Thursday 05 February 2026 03:21:27 +0000 (0:00:03.614) 0:00:06.695 ***** 2026-02-05 03:21:40.427277 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 03:21:40.427283 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-05 03:21:40.427289 | orchestrator | 
2026-02-05 03:21:40.427307 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-05 03:21:40.427314 | orchestrator | Thursday 05 February 2026 03:21:31 +0000 (0:00:03.994) 0:00:10.689 ***** 2026-02-05 03:21:40.427320 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 03:21:40.427326 | orchestrator | 2026-02-05 03:21:40.427332 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-05 03:21:40.427338 | orchestrator | Thursday 05 February 2026 03:21:34 +0000 (0:00:03.350) 0:00:14.040 ***** 2026-02-05 03:21:40.427344 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-05 03:21:40.427351 | orchestrator | 2026-02-05 03:21:40.427357 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-05 03:21:40.427363 | orchestrator | Thursday 05 February 2026 03:21:38 +0000 (0:00:04.141) 0:00:18.181 ***** 2026-02-05 03:21:40.427371 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:40.427378 | orchestrator | 2026-02-05 03:21:40.427386 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-05 03:21:40.427393 | orchestrator | Thursday 05 February 2026 03:21:38 +0000 (0:00:00.146) 0:00:18.327 ***** 2026-02-05 03:21:40.427403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:40.427428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:40.427437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:40.427451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:40.427466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:40.427474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:21:40.427482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:21:40.427495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:21:45.285657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:21:45.285858 | orchestrator | 2026-02-05 03:21:45.286742 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-05 03:21:45.286793 | orchestrator | Thursday 05 February 2026 03:21:40 +0000 (0:00:01.479) 0:00:19.807 ***** 2026-02-05 03:21:45.286812 | orchestrator | ok: [testbed-node-1 -> 
localhost] 2026-02-05 03:21:45.286834 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 03:21:45.286852 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 03:21:45.286871 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 03:21:45.286937 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 03:21:45.286958 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 03:21:45.286975 | orchestrator | 2026-02-05 03:21:45.286994 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-05 03:21:45.287007 | orchestrator | Thursday 05 February 2026 03:21:42 +0000 (0:00:01.760) 0:00:21.568 ***** 2026-02-05 03:21:45.287018 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:21:45.287030 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:21:45.287040 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:21:45.287051 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:21:45.287062 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:21:45.287073 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:21:45.287083 | orchestrator | 2026-02-05 03:21:45.287094 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-05 03:21:45.287105 | orchestrator | Thursday 05 February 2026 03:21:42 +0000 (0:00:00.626) 0:00:22.194 ***** 2026-02-05 03:21:45.287117 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:45.287128 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:45.287139 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:45.287150 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:21:45.287160 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:45.287171 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:45.287182 | orchestrator | 2026-02-05 03:21:45.287193 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-02-05 03:21:45.287205 | orchestrator | Thursday 05 February 2026 03:21:43 +0000 (0:00:00.808) 0:00:23.002 ***** 2026-02-05 03:21:45.287215 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:21:45.287226 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:21:45.287237 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:21:45.287248 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:21:45.287258 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:21:45.287269 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:21:45.287280 | orchestrator | 2026-02-05 03:21:45.287333 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-05 03:21:45.287345 | orchestrator | Thursday 05 February 2026 03:21:44 +0000 (0:00:00.610) 0:00:23.612 ***** 2026-02-05 03:21:45.287359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:45.287373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:45.287401 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:45.287440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:45.287453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:45.287465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:45.287481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:45.287493 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:45.287506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:45.287518 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:45.287529 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:21:45.287540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:45.287558 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:45.287579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.903854 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:49.904025 | orchestrator | 2026-02-05 03:21:49.904061 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-05 03:21:49.904081 | orchestrator | Thursday 05 February 2026 03:21:45 +0000 (0:00:01.056) 0:00:24.669 ***** 2026-02-05 03:21:49.904102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.904125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:49.904145 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:49.904183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.904205 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:49.904253 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:49.904274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.904286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 
03:21:49.904298 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:49.904330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.904343 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:21:49.904355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.904366 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:49.904384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:49.904406 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:49.904418 | orchestrator | 2026-02-05 03:21:49.904434 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-05 03:21:49.904448 | orchestrator | Thursday 05 February 2026 03:21:46 +0000 (0:00:00.835) 0:00:25.504 ***** 2026-02-05 03:21:49.904462 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 03:21:49.904475 | orchestrator | 2026-02-05 03:21:49.904489 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-05 03:21:49.904502 | orchestrator | Thursday 05 February 2026 03:21:46 +0000 (0:00:00.686) 0:00:26.190 ***** 2026-02-05 03:21:49.904515 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:21:49.904529 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:21:49.904542 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:21:49.904555 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:21:49.904568 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:21:49.904581 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:21:49.904593 | orchestrator | 2026-02-05 03:21:49.904607 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-05 03:21:49.904620 | orchestrator | Thursday 05 February 2026 03:21:47 +0000 
(0:00:00.777) 0:00:26.967 ***** 2026-02-05 03:21:49.904631 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:21:49.904642 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:21:49.904653 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:21:49.904664 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:21:49.904674 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:21:49.904685 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:21:49.904696 | orchestrator | 2026-02-05 03:21:49.904707 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-05 03:21:49.904718 | orchestrator | Thursday 05 February 2026 03:21:48 +0000 (0:00:00.937) 0:00:27.904 ***** 2026-02-05 03:21:49.904729 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:49.904740 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:49.904751 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:49.904762 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:21:49.904773 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:49.904784 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:49.904795 | orchestrator | 2026-02-05 03:21:49.904806 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-05 03:21:49.904818 | orchestrator | Thursday 05 February 2026 03:21:49 +0000 (0:00:00.789) 0:00:28.694 ***** 2026-02-05 03:21:49.904829 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:49.904839 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:49.904851 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:49.904861 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:21:49.904872 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:49.904883 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:49.904936 | orchestrator | 2026-02-05 03:21:54.972113 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-05 03:21:54.972220 | orchestrator | Thursday 05 February 2026 03:21:49 +0000 (0:00:00.599) 0:00:29.294 ***** 2026-02-05 03:21:54.972237 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 03:21:54.972250 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-05 03:21:54.972262 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 03:21:54.972273 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 03:21:54.972284 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 03:21:54.972295 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 03:21:54.972305 | orchestrator | 2026-02-05 03:21:54.972317 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-05 03:21:54.972328 | orchestrator | Thursday 05 February 2026 03:21:51 +0000 (0:00:01.509) 0:00:30.803 ***** 2026-02-05 03:21:54.972366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:54.972438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:54.972462 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:21:54.972482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:54.972502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:54.972521 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:54.972540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:21:54.972583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:21:54.972615 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:54.972635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:54.972667 | orchestrator | skipping: [testbed-node-3] 
2026-02-05 03:21:54.972705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:54.972734 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:54.972766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:21:54.972794 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:54.972824 | orchestrator | 2026-02-05 03:21:54.972855 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-05 03:21:54.972914 | orchestrator | Thursday 05 February 2026 03:21:52 +0000 (0:00:00.846) 0:00:31.650 ***** 2026-02-05 03:21:54.972935 | orchestrator | 
skipping: [testbed-node-0] 2026-02-05 03:21:54.972966 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:21:54.972998 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:21:54.973015 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:21:54.973033 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:21:54.973051 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:21:54.973068 | orchestrator | 2026-02-05 03:21:54.973086 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-05 03:21:54.973104 | orchestrator | Thursday 05 February 2026 03:21:53 +0000 (0:00:00.815) 0:00:32.465 ***** 2026-02-05 03:21:54.973121 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 03:21:54.973139 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-05 03:21:54.973156 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 03:21:54.973173 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 03:21:54.973192 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 03:21:54.973210 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 03:21:54.973229 | orchestrator | 2026-02-05 03:21:54.973248 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-05 03:21:54.973278 | orchestrator | Thursday 05 February 2026 03:21:54 +0000 (0:00:01.455) 0:00:33.920 ***** 2026-02-05 03:21:54.973311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:00.847377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:00.847514 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:00.847535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:00.847609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:00.847625 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:22:00.847637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:00.847649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:00.847685 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:22:00.847699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:00.847711 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:22:00.847742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:00.847754 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:22:00.847766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:00.847777 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:22:00.847788 | orchestrator | 2026-02-05 03:22:00.847806 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-05 03:22:00.847818 | orchestrator | Thursday 05 February 2026 03:21:55 +0000 (0:00:01.089) 0:00:35.010 ***** 2026-02-05 03:22:00.847830 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:00.847841 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:22:00.847852 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:22:00.847862 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:22:00.847873 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:22:00.847992 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:22:00.848015 | orchestrator | 2026-02-05 03:22:00.848032 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-05 03:22:00.848046 | orchestrator | Thursday 05 February 2026 03:21:56 +0000 (0:00:00.813) 0:00:35.823 ***** 2026-02-05 03:22:00.848059 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:00.848072 | orchestrator | 2026-02-05 03:22:00.848086 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-05 03:22:00.848099 | orchestrator | Thursday 05 February 2026 03:21:56 +0000 (0:00:00.149) 0:00:35.973 ***** 2026-02-05 03:22:00.848112 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:00.848124 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:22:00.848138 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:22:00.848151 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:22:00.848175 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:22:00.848188 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:22:00.848200 | 
orchestrator | 2026-02-05 03:22:00.848213 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-05 03:22:00.848226 | orchestrator | Thursday 05 February 2026 03:21:57 +0000 (0:00:00.610) 0:00:36.583 ***** 2026-02-05 03:22:00.848240 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 03:22:00.848254 | orchestrator | 2026-02-05 03:22:00.848267 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-05 03:22:00.848281 | orchestrator | Thursday 05 February 2026 03:21:58 +0000 (0:00:01.382) 0:00:37.966 ***** 2026-02-05 03:22:00.848295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:00.848318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:01.361647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:01.361776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:01.361814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:01.361850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:01.361862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:22:01.361874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:22:01.361961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:22:01.361975 | orchestrator | 2026-02-05 03:22:01.361988 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-05 03:22:01.361999 | orchestrator | Thursday 05 February 2026 03:22:00 +0000 (0:00:02.269) 0:00:40.235 ***** 2026-02-05 03:22:01.362010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:01.362082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:01.362103 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:01.362115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:01.362126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:01.362136 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:22:01.362147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:01.362168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:03.226387 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:22:03.226481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226497 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:22:03.226543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226553 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:22:03.226562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-02-05 03:22:03.226571 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:22:03.226581 | orchestrator | 2026-02-05 03:22:03.226590 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-05 03:22:03.226601 | orchestrator | Thursday 05 February 2026 03:22:01 +0000 (0:00:00.865) 0:00:41.101 ***** 2026-02-05 03:22:03.226611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:03.226649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:03.226679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 03:22:03.226698 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:03.226708 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:22:03.226717 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:22:03.226726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226736 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:22:03.226745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:03.226754 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:22:03.226772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-05 03:22:10.660350 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:22:10.660456 | orchestrator | 2026-02-05 03:22:10.660475 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-02-05 03:22:10.660488 | orchestrator | Thursday 05 February 2026 03:22:03 +0000 (0:00:01.508) 0:00:42.609 ***** 2026-02-05 03:22:10.660538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:10.660678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:10.660689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:10.660702 | orchestrator |
2026-02-05 03:22:10.660721 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-05 03:22:10.660741 | orchestrator | Thursday 05 February 2026 03:22:05 +0000 (0:00:02.594) 0:00:45.204 *****
2026-02-05 03:22:10.660760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:10.660823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:19.399333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:19.399360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:19.399369 | orchestrator |
2026-02-05 03:22:19.399379 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-02-05 03:22:19.399401 | orchestrator | Thursday 05 February 2026 03:22:10 +0000 (0:00:04.846) 0:00:50.051 *****
2026-02-05 03:22:19.399410 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 03:22:19.399419 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 03:22:19.399427 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 03:22:19.399435 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 03:22:19.399442 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 03:22:19.399450 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 03:22:19.399457 | orchestrator |
2026-02-05 03:22:19.399471 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-02-05 03:22:19.399479 | orchestrator | Thursday 05 February 2026 03:22:12 +0000 (0:00:01.521) 0:00:51.573 *****
2026-02-05 03:22:19.399487 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:22:19.399494 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:22:19.399502 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:22:19.399509 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:22:19.399517 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:22:19.399525 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:22:19.399532 | orchestrator |
2026-02-05 03:22:19.399539 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-02-05 03:22:19.399548 | orchestrator | Thursday 05 February 2026 03:22:12 +0000 (0:00:00.579) 0:00:52.153 *****
2026-02-05 03:22:19.399555 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:22:19.399563 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:22:19.399570 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:22:19.399578 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:22:19.399586 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:22:19.399593 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:22:19.399601 | orchestrator |
2026-02-05 03:22:19.399608 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-02-05 03:22:19.399616 | orchestrator | Thursday 05 February 2026 03:22:14 +0000 (0:00:01.693) 0:00:53.846 *****
2026-02-05 03:22:19.399624 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:22:19.399631 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:22:19.399639 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:22:19.399646 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:22:19.399653 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:22:19.399661 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:22:19.399668 | orchestrator |
2026-02-05 03:22:19.399676 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-02-05 03:22:19.399683 | orchestrator | Thursday 05 February 2026 03:22:15 +0000 (0:00:01.304) 0:00:55.189 *****
2026-02-05 03:22:19.399691 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 03:22:19.399699 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 03:22:19.399708 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 03:22:19.399721 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 03:22:19.399729 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 03:22:19.399737 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 03:22:19.399745 | orchestrator |
2026-02-05 03:22:19.399753 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-02-05 03:22:19.399761 | orchestrator | Thursday 05 February 2026 03:22:17 +0000 (0:00:01.304) 0:00:56.494 *****
2026-02-05 03:22:19.399770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:19.399807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:20.130632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:20.130653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:20.130673 | orchestrator |
2026-02-05 03:22:20.130686 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-02-05 03:22:20.130699 | orchestrator | Thursday 05 February 2026 03:22:19 +0000 (0:00:02.293) 0:00:58.787 *****
2026-02-05 03:22:20.130728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:20.130782 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:22:20.130796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:20.130819 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:22:20.130830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:20.130852 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:22:20.130869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:20.130915 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:22:20.130936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.307793 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:22:23.307959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.307979 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:22:23.307988 | orchestrator |
2026-02-05 03:22:23.307997 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-02-05 03:22:23.308007 | orchestrator | Thursday 05 February 2026 03:22:20 +0000 (0:00:00.736) 0:00:59.523 *****
2026-02-05 03:22:23.308016 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:22:23.308025 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:22:23.308034 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:22:23.308042 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:22:23.308051 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:22:23.308059 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:22:23.308068 | orchestrator |
2026-02-05 03:22:23.308076 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-02-05 03:22:23.308081 | orchestrator | Thursday 05 February 2026 03:22:20 +0000 (0:00:00.674) 0:01:00.198 *****
2026-02-05 03:22:23.308087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.308094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:23.308100 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:22:23.308122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.308148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:23.308154 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:22:23.308175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.308181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-05 03:22:23.308186 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:22:23.308191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.308196 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:22:23.308201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.308206 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:22:23.308214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:23.308225 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:22:23.308230 | orchestrator |
2026-02-05 03:22:23.308235 | orchestrator | TASK [ceilometer : Check ceilometer containers] ********************************
2026-02-05 03:22:23.308240 | orchestrator | Thursday 05 February 2026 03:22:21 +0000 (0:00:00.771) 0:01:00.969 *****
2026-02-05 03:22:23.308250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:52.968230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:52.968386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-05 03:22:52.968417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:52.968440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-05 03:22:52.968521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-05 03:22:52.968545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:22:52.968591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:22:52.968611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-05 03:22:52.968630 | orchestrator | 
2026-02-05 03:22:52.968650 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-05 03:22:52.968669 | orchestrator | Thursday 05 February 2026 03:22:23 +0000 (0:00:01.729) 0:01:02.699 ***** 2026-02-05 03:22:52.968796 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:22:52.968826 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:22:52.968846 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:22:52.968864 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:22:52.969084 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:22:52.969104 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:22:52.969116 | orchestrator | 2026-02-05 03:22:52.969128 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-05 03:22:52.969141 | orchestrator | Thursday 05 February 2026 03:22:23 +0000 (0:00:00.548) 0:01:03.247 ***** 2026-02-05 03:22:52.969152 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:22:52.969163 | orchestrator | 2026-02-05 03:22:52.969174 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-05 03:22:52.969217 | orchestrator | Thursday 05 February 2026 03:22:28 +0000 (0:00:04.899) 0:01:08.147 ***** 2026-02-05 03:22:52.969228 | orchestrator | 2026-02-05 03:22:52.969240 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-05 03:22:52.969267 | orchestrator | Thursday 05 February 2026 03:22:28 +0000 (0:00:00.072) 0:01:08.220 ***** 2026-02-05 03:22:52.969289 | orchestrator | 2026-02-05 03:22:52.969300 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-05 03:22:52.969311 | orchestrator | Thursday 05 February 2026 03:22:28 +0000 (0:00:00.071) 0:01:08.291 ***** 2026-02-05 03:22:52.969322 | orchestrator | 2026-02-05 03:22:52.969334 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-02-05 03:22:52.969345 | orchestrator | Thursday 05 February 2026 03:22:29 +0000 (0:00:00.260) 0:01:08.551 ***** 2026-02-05 03:22:52.969356 | orchestrator | 2026-02-05 03:22:52.969367 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-05 03:22:52.969377 | orchestrator | Thursday 05 February 2026 03:22:29 +0000 (0:00:00.071) 0:01:08.623 ***** 2026-02-05 03:22:52.969388 | orchestrator | 2026-02-05 03:22:52.969399 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-05 03:22:52.969410 | orchestrator | Thursday 05 February 2026 03:22:29 +0000 (0:00:00.064) 0:01:08.688 ***** 2026-02-05 03:22:52.969421 | orchestrator | 2026-02-05 03:22:52.969432 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-05 03:22:52.969443 | orchestrator | Thursday 05 February 2026 03:22:29 +0000 (0:00:00.075) 0:01:08.763 ***** 2026-02-05 03:22:52.969453 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:22:52.969476 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:22:52.969487 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:22:52.969498 | orchestrator | 2026-02-05 03:22:52.969509 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-05 03:22:52.969520 | orchestrator | Thursday 05 February 2026 03:22:34 +0000 (0:00:05.302) 0:01:14.066 ***** 2026-02-05 03:22:52.969531 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:22:52.969542 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:22:52.969553 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:22:52.969564 | orchestrator | 2026-02-05 03:22:52.969574 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-05 03:22:52.969585 | orchestrator | Thursday 05 February 2026 03:22:42 +0000 
(0:00:07.529) 0:01:21.596 ***** 2026-02-05 03:22:52.969596 | orchestrator | changed: [testbed-node-5] 2026-02-05 03:22:52.969607 | orchestrator | changed: [testbed-node-3] 2026-02-05 03:22:52.969618 | orchestrator | changed: [testbed-node-4] 2026-02-05 03:22:52.969629 | orchestrator | 2026-02-05 03:22:52.969640 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:22:52.969652 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-05 03:22:52.969664 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 03:22:52.969693 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 03:22:53.270698 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-05 03:22:53.270794 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-05 03:22:53.270807 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-05 03:22:53.270823 | orchestrator | 2026-02-05 03:22:53.270842 | orchestrator | 2026-02-05 03:22:53.270859 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:22:53.271030 | orchestrator | Thursday 05 February 2026 03:22:52 +0000 (0:00:10.752) 0:01:32.348 ***** 2026-02-05 03:22:53.271052 | orchestrator | =============================================================================== 2026-02-05 03:22:53.271067 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 10.75s 2026-02-05 03:22:53.271083 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 7.53s 2026-02-05 03:22:53.271098 | orchestrator | ceilometer : Restart 
ceilometer-notification container ------------------ 5.30s 2026-02-05 03:22:53.271114 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.90s 2026-02-05 03:22:53.271130 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.85s 2026-02-05 03:22:53.271146 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.14s 2026-02-05 03:22:53.271161 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.99s 2026-02-05 03:22:53.271177 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.61s 2026-02-05 03:22:53.271192 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.35s 2026-02-05 03:22:53.271207 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.59s 2026-02-05 03:22:53.271224 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.29s 2026-02-05 03:22:53.271240 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.27s 2026-02-05 03:22:53.271257 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.76s 2026-02-05 03:22:53.271274 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.73s 2026-02-05 03:22:53.271291 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.69s 2026-02-05 03:22:53.271308 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.52s 2026-02-05 03:22:53.271324 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.51s 2026-02-05 03:22:53.271340 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.51s 2026-02-05 03:22:53.271357 | orchestrator | ceilometer : Ensuring config 
directories exist -------------------------- 1.48s 2026-02-05 03:22:53.271374 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.46s 2026-02-05 03:22:55.266163 | orchestrator | 2026-02-05 03:22:55 | INFO  | Task 1a6e00e7-63c7-4be9-83b4-9ae543168453 (aodh) was prepared for execution. 2026-02-05 03:22:55.266259 | orchestrator | 2026-02-05 03:22:55 | INFO  | It takes a moment until task 1a6e00e7-63c7-4be9-83b4-9ae543168453 (aodh) has been started and output is visible here. 2026-02-05 03:23:28.866616 | orchestrator | 2026-02-05 03:23:28.866712 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:23:28.866729 | orchestrator | 2026-02-05 03:23:28.866743 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:23:28.866753 | orchestrator | Thursday 05 February 2026 03:22:59 +0000 (0:00:00.265) 0:00:00.265 ***** 2026-02-05 03:23:28.866762 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:23:28.866785 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:23:28.866793 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:23:28.866802 | orchestrator | 2026-02-05 03:23:28.866823 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:23:28.866832 | orchestrator | Thursday 05 February 2026 03:22:59 +0000 (0:00:00.327) 0:00:00.592 ***** 2026-02-05 03:23:28.866840 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-05 03:23:28.866849 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-05 03:23:28.866857 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-05 03:23:28.866916 | orchestrator | 2026-02-05 03:23:28.866925 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-05 03:23:28.866934 | orchestrator | 2026-02-05 03:23:28.866973 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-02-05 03:23:28.866982 | orchestrator | Thursday 05 February 2026 03:23:00 +0000 (0:00:00.436) 0:00:01.029 ***** 2026-02-05 03:23:28.866999 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:23:28.867008 | orchestrator | 2026-02-05 03:23:28.867016 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-05 03:23:28.867025 | orchestrator | Thursday 05 February 2026 03:23:00 +0000 (0:00:00.600) 0:00:01.629 ***** 2026-02-05 03:23:28.867033 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-05 03:23:28.867041 | orchestrator | 2026-02-05 03:23:28.867049 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-05 03:23:28.867057 | orchestrator | Thursday 05 February 2026 03:23:04 +0000 (0:00:03.732) 0:00:05.362 ***** 2026-02-05 03:23:28.867065 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-05 03:23:28.867073 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-05 03:23:28.867081 | orchestrator | 2026-02-05 03:23:28.867089 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-05 03:23:28.867097 | orchestrator | Thursday 05 February 2026 03:23:11 +0000 (0:00:07.025) 0:00:12.387 ***** 2026-02-05 03:23:28.867105 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 03:23:28.867114 | orchestrator | 2026-02-05 03:23:28.867122 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-05 03:23:28.867129 | orchestrator | Thursday 05 February 2026 03:23:15 +0000 (0:00:03.653) 0:00:16.040 ***** 2026-02-05 03:23:28.867137 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-02-05 03:23:28.867145 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-05 03:23:28.867153 | orchestrator | 2026-02-05 03:23:28.867161 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-05 03:23:28.867171 | orchestrator | Thursday 05 February 2026 03:23:19 +0000 (0:00:04.110) 0:00:20.151 ***** 2026-02-05 03:23:28.867180 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 03:23:28.867190 | orchestrator | 2026-02-05 03:23:28.867199 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-05 03:23:28.867208 | orchestrator | Thursday 05 February 2026 03:23:22 +0000 (0:00:03.542) 0:00:23.693 ***** 2026-02-05 03:23:28.867218 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-05 03:23:28.867227 | orchestrator | 2026-02-05 03:23:28.867236 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-05 03:23:28.867245 | orchestrator | Thursday 05 February 2026 03:23:26 +0000 (0:00:03.930) 0:00:27.623 ***** 2026-02-05 03:23:28.867257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:28.867295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:28.867312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:28.867322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:28.867331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:28.867340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:28.867348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:28.867381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:30.225563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:30.225715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:30.225768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:30.225781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:30.225791 | orchestrator | 2026-02-05 03:23:30.225802 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-05 03:23:30.225813 | orchestrator | Thursday 05 February 2026 03:23:28 +0000 (0:00:02.069) 0:00:29.693 ***** 2026-02-05 03:23:30.225823 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:23:30.225834 | orchestrator | 2026-02-05 
03:23:30.225842 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-05 03:23:30.225852 | orchestrator | Thursday 05 February 2026 03:23:28 +0000 (0:00:00.130) 0:00:29.823 ***** 2026-02-05 03:23:30.225908 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:23:30.225920 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:23:30.225929 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:23:30.225938 | orchestrator | 2026-02-05 03:23:30.225947 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-05 03:23:30.225956 | orchestrator | Thursday 05 February 2026 03:23:29 +0000 (0:00:00.541) 0:00:30.364 ***** 2026-02-05 03:23:30.225966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:30.226092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:30.226107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:30.226119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:30.226129 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:23:30.226141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:30.226152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:30.226171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:30.226188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.218194 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:23:35.218314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:35.218332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-05 03:23:35.218344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.218353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.218390 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:23:35.218400 | orchestrator | 2026-02-05 03:23:35.218410 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-05 03:23:35.218420 | orchestrator | Thursday 05 February 2026 03:23:30 +0000 (0:00:00.691) 0:00:31.056 ***** 2026-02-05 03:23:35.218430 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:23:35.218440 | orchestrator | 2026-02-05 03:23:35.218448 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-05 03:23:35.218457 | orchestrator | Thursday 
05 February 2026 03:23:30 +0000 (0:00:00.774) 0:00:31.830 ***** 2026-02-05 03:23:35.218466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:35.218498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:35.218509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:35.218519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:35.218535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-02-05 03:23:35.218544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:35.218553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:35.218611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:35.854126 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:35.854251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:35.854268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:35.854412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:35.854430 | orchestrator | 2026-02-05 03:23:35.854444 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-05 03:23:35.854457 | orchestrator | Thursday 05 February 2026 03:23:35 +0000 (0:00:04.216) 0:00:36.046 ***** 2026-02-05 03:23:35.854470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:35.854497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:35.854531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.854544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.854555 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:23:35.854568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:35.854589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:35.854603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.854622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:35.854635 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:23:35.854658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:36.886297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-05 03:23:36.886413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:36.886429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:36.886439 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:23:36.886451 | orchestrator | 2026-02-05 03:23:36.886461 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-05 03:23:36.886472 | orchestrator | Thursday 05 February 2026 03:23:35 +0000 (0:00:00.638) 0:00:36.685 ***** 2026-02-05 03:23:36.886482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:36.886505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:36.886516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:36.886542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:36.886559 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:23:36.886569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:36.886579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:36.886588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:36.886602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:36.886612 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:23:36.886628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 03:23:41.091111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 03:23:41.091247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 03:23:41.091276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 03:23:41.091291 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:23:41.091305 | orchestrator | 2026-02-05 03:23:41.091318 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-05 03:23:41.091331 | orchestrator | Thursday 05 February 2026 03:23:36 +0000 (0:00:01.028) 0:00:37.713 ***** 2026-02-05 03:23:41.091343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:41.091375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:41.091407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:41.091442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:41.091454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:41.091466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:41.091477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:41.091494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:41.091506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:41.091533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.565687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.565797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.565815 | orchestrator | 2026-02-05 03:23:49.565830 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-05 03:23:49.565844 | orchestrator | Thursday 05 February 2026 03:23:41 +0000 (0:00:04.201) 0:00:41.914 ***** 2026-02-05 03:23:49.565944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:49.565979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:49.566095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:49.566154 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:49.566386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709144 | orchestrator | 2026-02-05 03:23:54.709158 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-05 03:23:54.709169 | orchestrator | Thursday 05 February 2026 03:23:49 +0000 (0:00:08.478) 0:00:50.392 ***** 2026-02-05 03:23:54.709178 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:23:54.709189 | orchestrator | 
changed: [testbed-node-0] 2026-02-05 03:23:54.709198 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:23:54.709206 | orchestrator | 2026-02-05 03:23:54.709215 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-05 03:23:54.709224 | orchestrator | Thursday 05 February 2026 03:23:51 +0000 (0:00:01.767) 0:00:52.160 ***** 2026-02-05 03:23:54.709235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:54.709284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:54.709295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 03:23:54.709319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:23:54.709401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:24:50.176125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-05 03:24:50.176211 | orchestrator | 2026-02-05 03:24:50.176221 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-05 03:24:50.176229 | orchestrator | Thursday 05 February 2026 03:23:54 +0000 (0:00:03.376) 0:00:55.537 ***** 2026-02-05 03:24:50.176234 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:24:50.176241 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:24:50.176246 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:24:50.176251 | orchestrator | 2026-02-05 03:24:50.176256 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-05 03:24:50.176261 | orchestrator | Thursday 05 February 2026 03:23:55 +0000 (0:00:00.310) 0:00:55.847 ***** 2026-02-05 03:24:50.176286 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:24:50.176291 | orchestrator | 2026-02-05 03:24:50.176296 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-05 03:24:50.176302 | orchestrator | Thursday 05 February 2026 03:23:57 +0000 (0:00:02.260) 0:00:58.107 ***** 2026-02-05 03:24:50.176307 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:24:50.176312 | orchestrator | 2026-02-05 
03:24:50.176317 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-05 03:24:50.176322 | orchestrator | Thursday 05 February 2026 03:23:59 +0000 (0:00:02.566) 0:01:00.673 ***** 2026-02-05 03:24:50.176327 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:24:50.176332 | orchestrator | 2026-02-05 03:24:50.176337 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-05 03:24:50.176342 | orchestrator | Thursday 05 February 2026 03:24:13 +0000 (0:00:13.717) 0:01:14.390 ***** 2026-02-05 03:24:50.176347 | orchestrator | 2026-02-05 03:24:50.176352 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-05 03:24:50.176357 | orchestrator | Thursday 05 February 2026 03:24:13 +0000 (0:00:00.071) 0:01:14.462 ***** 2026-02-05 03:24:50.176362 | orchestrator | 2026-02-05 03:24:50.176379 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-05 03:24:50.176384 | orchestrator | Thursday 05 February 2026 03:24:13 +0000 (0:00:00.089) 0:01:14.552 ***** 2026-02-05 03:24:50.176390 | orchestrator | 2026-02-05 03:24:50.176395 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-05 03:24:50.176400 | orchestrator | Thursday 05 February 2026 03:24:13 +0000 (0:00:00.283) 0:01:14.835 ***** 2026-02-05 03:24:50.176405 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:24:50.176410 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:24:50.176415 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:24:50.176420 | orchestrator | 2026-02-05 03:24:50.176425 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-05 03:24:50.176430 | orchestrator | Thursday 05 February 2026 03:24:24 +0000 (0:00:10.560) 0:01:25.396 ***** 2026-02-05 03:24:50.176435 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 03:24:50.176440 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:24:50.176445 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:24:50.176450 | orchestrator | 2026-02-05 03:24:50.176455 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-05 03:24:50.176461 | orchestrator | Thursday 05 February 2026 03:24:34 +0000 (0:00:09.960) 0:01:35.356 ***** 2026-02-05 03:24:50.176466 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:24:50.176471 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:24:50.176476 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:24:50.176481 | orchestrator | 2026-02-05 03:24:50.176486 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-05 03:24:50.176491 | orchestrator | Thursday 05 February 2026 03:24:39 +0000 (0:00:05.002) 0:01:40.358 ***** 2026-02-05 03:24:50.176496 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:24:50.176501 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:24:50.176506 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:24:50.176511 | orchestrator | 2026-02-05 03:24:50.176516 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:24:50.176522 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 03:24:50.176529 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 03:24:50.176534 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 03:24:50.176539 | orchestrator | 2026-02-05 03:24:50.176544 | orchestrator | 2026-02-05 03:24:50.176549 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:24:50.176612 | orchestrator | Thursday 05 February 
2026 03:24:49 +0000 (0:00:10.303) 0:01:50.662 ***** 2026-02-05 03:24:50.176618 | orchestrator | =============================================================================== 2026-02-05 03:24:50.176623 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.72s 2026-02-05 03:24:50.176629 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.56s 2026-02-05 03:24:50.176644 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.30s 2026-02-05 03:24:50.176650 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 9.96s 2026-02-05 03:24:50.176655 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.48s 2026-02-05 03:24:50.176660 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 7.03s 2026-02-05 03:24:50.176665 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.00s 2026-02-05 03:24:50.176670 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.22s 2026-02-05 03:24:50.176675 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.20s 2026-02-05 03:24:50.176681 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.11s 2026-02-05 03:24:50.176687 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.93s 2026-02-05 03:24:50.176693 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.73s 2026-02-05 03:24:50.176698 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.65s 2026-02-05 03:24:50.176704 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.54s 2026-02-05 03:24:50.176710 | orchestrator | aodh : Check aodh containers 
-------------------------------------------- 3.38s 2026-02-05 03:24:50.176716 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.57s 2026-02-05 03:24:50.176722 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.26s 2026-02-05 03:24:50.176728 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.07s 2026-02-05 03:24:50.176735 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.77s 2026-02-05 03:24:50.176741 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.03s 2026-02-05 03:24:52.567729 | orchestrator | 2026-02-05 03:24:52 | INFO  | Task c92f1b0d-a39c-4cf8-822d-7253d684d5ba (kolla-ceph-rgw) was prepared for execution. 2026-02-05 03:24:52.567827 | orchestrator | 2026-02-05 03:24:52 | INFO  | It takes a moment until task c92f1b0d-a39c-4cf8-822d-7253d684d5ba (kolla-ceph-rgw) has been started and output is visible here. 
2026-02-05 03:25:27.058963 | orchestrator | 2026-02-05 03:25:27.059067 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:25:27.059083 | orchestrator | 2026-02-05 03:25:27.059094 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:25:27.059121 | orchestrator | Thursday 05 February 2026 03:24:56 +0000 (0:00:00.280) 0:00:00.280 ***** 2026-02-05 03:25:27.059132 | orchestrator | ok: [testbed-manager] 2026-02-05 03:25:27.059143 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:25:27.059153 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:25:27.059163 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:25:27.059173 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:25:27.059183 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:25:27.059193 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:25:27.059203 | orchestrator | 2026-02-05 03:25:27.059218 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:25:27.059234 | orchestrator | Thursday 05 February 2026 03:24:57 +0000 (0:00:00.880) 0:00:01.160 ***** 2026-02-05 03:25:27.059251 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059268 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059285 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059326 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059337 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059347 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059357 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-05 03:25:27.059366 | orchestrator | 2026-02-05 03:25:27.059376 | orchestrator | PLAY [Apply role ceph-rgw] 
***************************************************** 2026-02-05 03:25:27.059386 | orchestrator | 2026-02-05 03:25:27.059395 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-05 03:25:27.059405 | orchestrator | Thursday 05 February 2026 03:24:58 +0000 (0:00:00.782) 0:00:01.943 ***** 2026-02-05 03:25:27.059422 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 03:25:27.059440 | orchestrator | 2026-02-05 03:25:27.059456 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-05 03:25:27.059472 | orchestrator | Thursday 05 February 2026 03:25:00 +0000 (0:00:01.637) 0:00:03.580 ***** 2026-02-05 03:25:27.059487 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-05 03:25:27.059503 | orchestrator | 2026-02-05 03:25:27.059518 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-05 03:25:27.059534 | orchestrator | Thursday 05 February 2026 03:25:04 +0000 (0:00:03.933) 0:00:07.514 ***** 2026-02-05 03:25:27.059552 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-05 03:25:27.059572 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-05 03:25:27.059590 | orchestrator | 2026-02-05 03:25:27.059609 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-05 03:25:27.059626 | orchestrator | Thursday 05 February 2026 03:25:10 +0000 (0:00:06.393) 0:00:13.907 ***** 2026-02-05 03:25:27.059641 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-05 03:25:27.059653 | orchestrator | 2026-02-05 03:25:27.059664 
| orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-05 03:25:27.059676 | orchestrator | Thursday 05 February 2026 03:25:13 +0000 (0:00:02.856) 0:00:16.764 ***** 2026-02-05 03:25:27.059688 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 03:25:27.059700 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-05 03:25:27.059711 | orchestrator | 2026-02-05 03:25:27.059722 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-05 03:25:27.059734 | orchestrator | Thursday 05 February 2026 03:25:16 +0000 (0:00:03.653) 0:00:20.417 ***** 2026-02-05 03:25:27.059745 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-05 03:25:27.059757 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-05 03:25:27.059769 | orchestrator | 2026-02-05 03:25:27.059780 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-05 03:25:27.059791 | orchestrator | Thursday 05 February 2026 03:25:22 +0000 (0:00:05.575) 0:00:25.992 ***** 2026-02-05 03:25:27.059803 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-05 03:25:27.059814 | orchestrator | 2026-02-05 03:25:27.059826 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:25:27.059836 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059872 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059884 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059905 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059915 | 
orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059947 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059958 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:27.059968 | orchestrator | 2026-02-05 03:25:27.059985 | orchestrator | 2026-02-05 03:25:27.059995 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:25:27.060005 | orchestrator | Thursday 05 February 2026 03:25:26 +0000 (0:00:04.239) 0:00:30.232 ***** 2026-02-05 03:25:27.060015 | orchestrator | =============================================================================== 2026-02-05 03:25:27.060024 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.39s 2026-02-05 03:25:27.060034 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.58s 2026-02-05 03:25:27.060065 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.24s 2026-02-05 03:25:27.060075 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.93s 2026-02-05 03:25:27.060085 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.65s 2026-02-05 03:25:27.060094 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.86s 2026-02-05 03:25:27.060104 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.64s 2026-02-05 03:25:27.060114 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2026-02-05 03:25:27.060124 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2026-02-05 03:25:29.068774 | orchestrator | 2026-02-05 03:25:29 | 
INFO  | Task 72e24243-058d-4b40-a524-81b4fa4d59ba (gnocchi) was prepared for execution. 2026-02-05 03:25:29.068885 | orchestrator | 2026-02-05 03:25:29 | INFO  | It takes a moment until task 72e24243-058d-4b40-a524-81b4fa4d59ba (gnocchi) has been started and output is visible here. 2026-02-05 03:25:33.921949 | orchestrator | 2026-02-05 03:25:33.922141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:25:33.922168 | orchestrator | 2026-02-05 03:25:33.922183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:25:33.922197 | orchestrator | Thursday 05 February 2026 03:25:32 +0000 (0:00:00.237) 0:00:00.237 ***** 2026-02-05 03:25:33.922212 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:25:33.922229 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:25:33.922243 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:25:33.922257 | orchestrator | 2026-02-05 03:25:33.922272 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:25:33.922286 | orchestrator | Thursday 05 February 2026 03:25:33 +0000 (0:00:00.305) 0:00:00.543 ***** 2026-02-05 03:25:33.922300 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-05 03:25:33.922315 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-02-05 03:25:33.922330 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-05 03:25:33.922339 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-05 03:25:33.922347 | orchestrator | 2026-02-05 03:25:33.922355 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-05 03:25:33.922364 | orchestrator | skipping: no hosts matched 2026-02-05 03:25:33.922372 | orchestrator | 2026-02-05 03:25:33.922380 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-05 03:25:33.922389 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:33.922421 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:33.922429 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:25:33.922438 | orchestrator | 2026-02-05 03:25:33.922447 | orchestrator | 2026-02-05 03:25:33.922457 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:25:33.922466 | orchestrator | Thursday 05 February 2026 03:25:33 +0000 (0:00:00.374) 0:00:00.917 ***** 2026-02-05 03:25:33.922476 | orchestrator | =============================================================================== 2026-02-05 03:25:33.922485 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-02-05 03:25:33.922494 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-02-05 03:25:36.229419 | orchestrator | 2026-02-05 03:25:36 | INFO  | Task b9589d2b-3260-4331-98e1-58842c31cea0 (manila) was prepared for execution. 2026-02-05 03:25:36.229553 | orchestrator | 2026-02-05 03:25:36 | INFO  | It takes a moment until task b9589d2b-3260-4331-98e1-58842c31cea0 (manila) has been started and output is visible here. 
2026-02-05 03:26:19.095991 | orchestrator | 2026-02-05 03:26:19.096107 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:26:19.096126 | orchestrator | 2026-02-05 03:26:19.096138 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:26:19.096150 | orchestrator | Thursday 05 February 2026 03:25:40 +0000 (0:00:00.276) 0:00:00.276 ***** 2026-02-05 03:26:19.096161 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:26:19.096173 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:26:19.096184 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:26:19.096195 | orchestrator | 2026-02-05 03:26:19.096207 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:26:19.096218 | orchestrator | Thursday 05 February 2026 03:25:40 +0000 (0:00:00.325) 0:00:00.601 ***** 2026-02-05 03:26:19.096229 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-05 03:26:19.096240 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-05 03:26:19.096251 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-05 03:26:19.096262 | orchestrator | 2026-02-05 03:26:19.096291 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-05 03:26:19.096303 | orchestrator | 2026-02-05 03:26:19.096314 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-05 03:26:19.096325 | orchestrator | Thursday 05 February 2026 03:25:41 +0000 (0:00:00.444) 0:00:01.046 ***** 2026-02-05 03:26:19.096336 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:26:19.096348 | orchestrator | 2026-02-05 03:26:19.096359 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-05 
03:26:19.096370 | orchestrator | Thursday 05 February 2026 03:25:41 +0000 (0:00:00.540) 0:00:01.587 ***** 2026-02-05 03:26:19.096381 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:26:19.096392 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:26:19.096403 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:26:19.096414 | orchestrator | 2026-02-05 03:26:19.096425 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-05 03:26:19.096436 | orchestrator | Thursday 05 February 2026 03:25:42 +0000 (0:00:00.464) 0:00:02.051 ***** 2026-02-05 03:26:19.096448 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-05 03:26:19.096461 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-05 03:26:19.096475 | orchestrator | 2026-02-05 03:26:19.096489 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-05 03:26:19.096527 | orchestrator | Thursday 05 February 2026 03:25:49 +0000 (0:00:06.906) 0:00:08.957 ***** 2026-02-05 03:26:19.096540 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-05 03:26:19.096555 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-05 03:26:19.096568 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-05 03:26:19.096581 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-05 03:26:19.096595 | orchestrator | 2026-02-05 03:26:19.096607 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-02-05 03:26:19.096621 | orchestrator | Thursday 05 February 2026 03:26:02 +0000 (0:00:13.446) 0:00:22.403 ***** 2026-02-05 03:26:19.096634 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 03:26:19.096647 | orchestrator | 2026-02-05 03:26:19.096660 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-05 03:26:19.096674 | orchestrator | Thursday 05 February 2026 03:26:05 +0000 (0:00:03.275) 0:00:25.678 ***** 2026-02-05 03:26:19.096687 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 03:26:19.096700 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-05 03:26:19.096713 | orchestrator | 2026-02-05 03:26:19.096726 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-05 03:26:19.096739 | orchestrator | Thursday 05 February 2026 03:26:09 +0000 (0:00:03.912) 0:00:29.591 ***** 2026-02-05 03:26:19.096752 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 03:26:19.096765 | orchestrator | 2026-02-05 03:26:19.096778 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-05 03:26:19.096791 | orchestrator | Thursday 05 February 2026 03:26:12 +0000 (0:00:03.240) 0:00:32.832 ***** 2026-02-05 03:26:19.096804 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-05 03:26:19.096817 | orchestrator | 2026-02-05 03:26:19.096829 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-05 03:26:19.096840 | orchestrator | Thursday 05 February 2026 03:26:16 +0000 (0:00:03.941) 0:00:36.773 ***** 2026-02-05 03:26:19.096917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:26:19.096943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:26:19.096964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:26:19.096977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:19.097016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:19.097054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:19.097088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:29.502363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:29.502509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:29.502524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:29.502535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:29.502580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:29.502592 | orchestrator | 2026-02-05 03:26:29.502604 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-05 03:26:29.502614 | orchestrator | Thursday 05 February 2026 03:26:19 +0000 (0:00:02.258) 0:00:39.032 ***** 2026-02-05 03:26:29.502624 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:26:29.502633 | orchestrator | 2026-02-05 03:26:29.502642 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-05 03:26:29.502651 | orchestrator | Thursday 05 February 2026 03:26:19 +0000 (0:00:00.564) 0:00:39.597 ***** 2026-02-05 03:26:29.502660 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:26:29.502670 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:26:29.502679 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:26:29.502688 | orchestrator | 2026-02-05 03:26:29.502697 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-05 03:26:29.502705 | orchestrator | Thursday 05 February 2026 03:26:20 +0000 (0:00:00.927) 0:00:40.524 ***** 2026-02-05 03:26:29.502715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-05 03:26:29.502750 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-05 03:26:29.502760 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-05 03:26:29.502775 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-05 03:26:29.502784 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-05 03:26:29.502793 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-05 03:26:29.502802 | orchestrator | 2026-02-05 03:26:29.502812 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-05 03:26:29.502827 | orchestrator | Thursday 05 February 2026 03:26:22 +0000 (0:00:01.760) 0:00:42.285 ***** 2026-02-05 03:26:29.502869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-05 03:26:29.502885 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-05 03:26:29.502901 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-05 03:26:29.502916 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-02-05 03:26:29.502930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-05 03:26:29.502945 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-05 03:26:29.502959 | orchestrator | 2026-02-05 03:26:29.502974 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-05 03:26:29.502989 | orchestrator | Thursday 05 February 2026 03:26:23 +0000 (0:00:01.217) 0:00:43.502 ***** 2026-02-05 03:26:29.503004 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-05 03:26:29.503019 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-05 03:26:29.503033 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-05 03:26:29.503047 | orchestrator | 2026-02-05 03:26:29.503062 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-05 03:26:29.503075 | orchestrator | Thursday 05 February 2026 03:26:24 +0000 (0:00:00.688) 0:00:44.191 ***** 2026-02-05 03:26:29.503089 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:26:29.503105 | orchestrator | 2026-02-05 03:26:29.503121 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-05 03:26:29.503137 | orchestrator | Thursday 05 February 2026 03:26:24 +0000 (0:00:00.122) 0:00:44.314 ***** 2026-02-05 03:26:29.503154 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:26:29.503169 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:26:29.503184 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:26:29.503199 | orchestrator | 2026-02-05 03:26:29.503215 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-02-05 03:26:29.503248 | orchestrator | Thursday 05 February 2026 03:26:24 +0000 (0:00:00.479) 0:00:44.794 ***** 2026-02-05 03:26:29.503287 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:26:29.503297 | orchestrator | 2026-02-05 03:26:29.503306 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-05 03:26:29.503315 | orchestrator | Thursday 05 February 2026 03:26:25 +0000 (0:00:00.576) 0:00:45.371 ***** 2026-02-05 03:26:29.503338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:26:30.382630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:26:30.382732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:26:30.382750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:30.382763 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.382991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:30.383003 | orchestrator |
2026-02-05 03:26:30.383016 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] ***
2026-02-05 03:26:30.383029 | orchestrator | Thursday 05 February 2026 03:26:29 +0000 (0:00:04.059) 0:00:49.430 *****
2026-02-05 03:26:30.383050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:31.050628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.050744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.050771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.050790 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:26:31.050810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:31.050887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.050907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.050958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.050979 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:26:31.050998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:31.051017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.051047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.051066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:31.051083 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:26:31.051103 | orchestrator |
2026-02-05 03:26:31.051123 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-02-05 03:26:31.051143 | orchestrator | Thursday 05 February 2026 03:26:30 +0000 (0:00:00.894) 0:00:50.325 *****
2026-02-05 03:26:31.051184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:35.699217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699342 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:26:35.699351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image':
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:35.699359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699405 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:26:35.699412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:35.699424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:35.699445 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:26:35.699451 | orchestrator |
2026-02-05 03:26:35.699458 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-02-05 03:26:35.699467 | orchestrator | Thursday 05
February 2026 03:26:31 +0000 (0:00:00.892) 0:00:51.218 *****
2026-02-05 03:26:35.699487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:42.357580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:42.357727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:42.357749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 03:26:42.357938 | orchestrator |
2026-02-05 03:26:42.357948 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-05 03:26:42.357961 | orchestrator | Thursday 05 February 2026 03:26:35 +0000 (0:00:04.603) 0:00:55.821 *****
2026-02-05 03:26:42.357975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:46.283112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:46.283227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-05 03:26:46.283244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:46.283258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 03:26:46.283288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 03:26:46.283320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes':
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 03:26:46.283355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:46.283367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 03:26:46.283379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:46.283391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:46.283407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:26:46.283450 | orchestrator | 2026-02-05 03:26:46.283465 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-05 03:26:46.283479 | orchestrator | Thursday 05 February 2026 03:26:42 +0000 (0:00:06.481) 0:01:02.303 ***** 
2026-02-05 03:26:46.283506 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-05 03:26:46.283518 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-05 03:26:46.283529 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-05 03:26:46.283540 | orchestrator | 2026-02-05 03:26:46.283552 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-05 03:26:46.283563 | orchestrator | Thursday 05 February 2026 03:26:45 +0000 (0:00:03.209) 0:01:05.512 ***** 2026-02-05 03:26:46.283584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 03:26:49.565250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565411 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:26:49.565457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 03:26:49.565507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565599 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:26:49.565618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 03:26:49.565634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 03:26:49.565693 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:26:49.565704 | orchestrator | 2026-02-05 03:26:49.565717 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-05 03:26:49.565729 | orchestrator | Thursday 05 February 2026 03:26:46 +0000 (0:00:00.701) 0:01:06.214 ***** 2026-02-05 03:26:49.565750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:27:31.807260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:27:31.807386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 03:27:31.807456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-05 03:27:31.807642 | orchestrator | 2026-02-05 03:27:31.807659 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-05 03:27:31.807675 | orchestrator | Thursday 05 February 2026 03:26:49 +0000 (0:00:03.289) 0:01:09.504 ***** 2026-02-05 03:27:31.807690 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:27:31.807707 | orchestrator | 2026-02-05 03:27:31.807722 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-05 03:27:31.807736 | orchestrator | Thursday 05 February 2026 03:26:51 +0000 (0:00:02.278) 0:01:11.782 ***** 2026-02-05 03:27:31.807752 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:27:31.807768 | orchestrator | 2026-02-05 03:27:31.807786 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-05 03:27:31.807801 | orchestrator | Thursday 05 February 2026 03:26:54 +0000 (0:00:02.375) 0:01:14.157 ***** 2026-02-05 03:27:31.807817 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:27:31.807834 | orchestrator | 2026-02-05 03:27:31.807962 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-05 03:27:31.807985 | orchestrator | Thursday 05 February 2026 03:27:31 +0000 (0:00:37.242) 0:01:51.400 ***** 2026-02-05 03:27:31.808001 | orchestrator | 2026-02-05 03:27:31.808028 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-05 03:28:15.367157 | orchestrator | Thursday 05 February 2026 
03:27:31 +0000 (0:00:00.089) 0:01:51.489 ***** 2026-02-05 03:28:15.367285 | orchestrator | 2026-02-05 03:28:15.367325 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-05 03:28:15.367341 | orchestrator | Thursday 05 February 2026 03:27:31 +0000 (0:00:00.073) 0:01:51.562 ***** 2026-02-05 03:28:15.367355 | orchestrator | 2026-02-05 03:28:15.367369 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-05 03:28:15.367384 | orchestrator | Thursday 05 February 2026 03:27:31 +0000 (0:00:00.072) 0:01:51.635 ***** 2026-02-05 03:28:15.367398 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:28:15.367415 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:28:15.367432 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:28:15.367446 | orchestrator | 2026-02-05 03:28:15.367462 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-05 03:28:15.367511 | orchestrator | Thursday 05 February 2026 03:27:46 +0000 (0:00:14.255) 0:02:05.890 ***** 2026-02-05 03:28:15.367528 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:28:15.367543 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:28:15.367557 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:28:15.367571 | orchestrator | 2026-02-05 03:28:15.367585 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-05 03:28:15.367599 | orchestrator | Thursday 05 February 2026 03:27:51 +0000 (0:00:05.585) 0:02:11.476 ***** 2026-02-05 03:28:15.367613 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:28:15.367627 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:28:15.367641 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:28:15.367655 | orchestrator | 2026-02-05 03:28:15.367670 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 
2026-02-05 03:28:15.367684 | orchestrator | Thursday 05 February 2026 03:28:01 +0000 (0:00:09.865) 0:02:21.342 *****
2026-02-05 03:28:15.367698 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:28:15.367712 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:28:15.367726 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:28:15.367739 | orchestrator |
2026-02-05 03:28:15.367754 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:28:15.367769 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-05 03:28:15.367784 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 03:28:15.367799 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 03:28:15.367813 | orchestrator |
2026-02-05 03:28:15.367828 | orchestrator |
2026-02-05 03:28:15.367870 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:28:15.367885 | orchestrator | Thursday 05 February 2026 03:28:14 +0000 (0:00:13.417) 0:02:34.759 *****
2026-02-05 03:28:15.367901 | orchestrator | ===============================================================================
2026-02-05 03:28:15.367915 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.24s
2026-02-05 03:28:15.367949 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.26s
2026-02-05 03:28:15.367964 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.45s
2026-02-05 03:28:15.367979 | orchestrator | manila : Restart manila-share container -------------------------------- 13.42s
2026-02-05 03:28:15.367993 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 9.87s
2026-02-05 03:28:15.368007 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.91s
2026-02-05 03:28:15.368022 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.48s
2026-02-05 03:28:15.368036 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.59s
2026-02-05 03:28:15.368050 | orchestrator | manila : Copying over config.json files for services -------------------- 4.60s
2026-02-05 03:28:15.368065 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.06s
2026-02-05 03:28:15.368079 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.94s
2026-02-05 03:28:15.368094 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.91s
2026-02-05 03:28:15.368109 | orchestrator | manila : Check manila containers ---------------------------------------- 3.29s
2026-02-05 03:28:15.368123 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.28s
2026-02-05 03:28:15.368138 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.24s
2026-02-05 03:28:15.368152 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.21s
2026-02-05 03:28:15.368167 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.38s
2026-02-05 03:28:15.368193 | orchestrator | manila : Creating Manila database --------------------------------------- 2.28s
2026-02-05 03:28:15.368208 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.26s
2026-02-05 03:28:15.368223 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.76s
2026-02-05 03:28:15.676593 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-05 03:28:27.855955 | orchestrator | 2026-02-05 03:28:27 | INFO  | Task ff5b50ac-2340-4e01-be53-2f0f673ce062 (netdata) was prepared for execution.
2026-02-05 03:28:27.856088 | orchestrator | 2026-02-05 03:28:27 | INFO  | It takes a moment until task ff5b50ac-2340-4e01-be53-2f0f673ce062 (netdata) has been started and output is visible here.
2026-02-05 03:29:53.066594 | orchestrator |
2026-02-05 03:29:53.066739 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:29:53.066755 | orchestrator |
2026-02-05 03:29:53.066766 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:29:53.066776 | orchestrator | Thursday 05 February 2026 03:28:32 +0000 (0:00:00.230) 0:00:00.230 *****
2026-02-05 03:29:53.066802 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-05 03:29:53.066813 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-05 03:29:53.066823 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-05 03:29:53.066877 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-05 03:29:53.066888 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-05 03:29:53.066898 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-05 03:29:53.066908 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-05 03:29:53.066917 | orchestrator |
2026-02-05 03:29:53.066927 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-05 03:29:53.066936 | orchestrator |
2026-02-05 03:29:53.066946 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-05 03:29:53.066956 | orchestrator | Thursday 05 February 2026 03:28:33 +0000 (0:00:00.938) 0:00:01.169 *****
2026-02-05 03:29:53.066968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 03:29:53.066980 | orchestrator |
2026-02-05 03:29:53.066990 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-05 03:29:53.067000 | orchestrator | Thursday 05 February 2026 03:28:34 +0000 (0:00:01.301) 0:00:02.470 *****
2026-02-05 03:29:53.067010 | orchestrator | ok: [testbed-manager]
2026-02-05 03:29:53.067021 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:29:53.067031 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:29:53.067040 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:29:53.067050 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:29:53.067059 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:29:53.067069 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:29:53.067079 | orchestrator |
2026-02-05 03:29:53.067088 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-05 03:29:53.067098 | orchestrator | Thursday 05 February 2026 03:28:36 +0000 (0:00:01.814) 0:00:04.285 *****
2026-02-05 03:29:53.067108 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:29:53.067117 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:29:53.067127 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:29:53.067138 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:29:53.067149 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:29:53.067161 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:29:53.067173 | orchestrator | ok: [testbed-manager]
2026-02-05 03:29:53.067184 | orchestrator |
2026-02-05 03:29:53.067196 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-05 03:29:53.067233 | orchestrator | Thursday 05 February 2026 03:28:38 +0000 (0:00:02.186) 0:00:06.471 *****
2026-02-05 03:29:53.067244 | orchestrator | changed: [testbed-manager]
2026-02-05 03:29:53.067256 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:29:53.067281 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:29:53.067293 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:29:53.067305 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:29:53.067316 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:29:53.067327 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:29:53.067339 | orchestrator |
2026-02-05 03:29:53.067351 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-05 03:29:53.067363 | orchestrator | Thursday 05 February 2026 03:28:40 +0000 (0:00:01.528) 0:00:08.000 *****
2026-02-05 03:29:53.067374 | orchestrator | changed: [testbed-manager]
2026-02-05 03:29:53.067385 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:29:53.067396 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:29:53.067407 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:29:53.067418 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:29:53.067429 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:29:53.067441 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:29:53.067452 | orchestrator |
2026-02-05 03:29:53.067464 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-05 03:29:53.067476 | orchestrator | Thursday 05 February 2026 03:28:55 +0000 (0:00:15.413) 0:00:23.414 *****
2026-02-05 03:29:53.067488 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:29:53.067497 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:29:53.067507 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:29:53.067516 | orchestrator | changed: [testbed-manager]
2026-02-05 03:29:53.067526 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:29:53.067535 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:29:53.067545 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:29:53.067554 | orchestrator |
2026-02-05 03:29:53.067564 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-05 03:29:53.067574 | orchestrator | Thursday 05 February 2026 03:29:35 +0000 (0:00:39.820) 0:01:03.235 *****
2026-02-05 03:29:53.067584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 03:29:53.067597 | orchestrator |
2026-02-05 03:29:53.067606 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-05 03:29:53.067616 | orchestrator | Thursday 05 February 2026 03:29:36 +0000 (0:00:01.555) 0:01:04.790 *****
2026-02-05 03:29:53.067626 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-05 03:29:53.067636 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-05 03:29:53.067645 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-05 03:29:53.067655 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-05 03:29:53.067682 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-05 03:29:53.067692 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-05 03:29:53.067702 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-05 03:29:53.067711 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-05 03:29:53.067721 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-05 03:29:53.067731 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-05 03:29:53.067740 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-05 03:29:53.067750 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-05 03:29:53.067759 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-05 03:29:53.067769 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-05 03:29:53.067778 | orchestrator |
2026-02-05 03:29:53.067788 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-05 03:29:53.067806 | orchestrator | Thursday 05 February 2026 03:29:40 +0000 (0:00:03.404) 0:01:08.194 *****
2026-02-05 03:29:53.067816 | orchestrator | ok: [testbed-manager]
2026-02-05 03:29:53.067826 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:29:53.067853 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:29:53.067862 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:29:53.067872 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:29:53.067882 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:29:53.067891 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:29:53.067901 | orchestrator |
2026-02-05 03:29:53.067910 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-05 03:29:53.067920 | orchestrator | Thursday 05 February 2026 03:29:41 +0000 (0:00:01.240) 0:01:09.435 *****
2026-02-05 03:29:53.067930 | orchestrator | changed: [testbed-manager]
2026-02-05 03:29:53.067939 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:29:53.067949 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:29:53.067959 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:29:53.067969 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:29:53.067979 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:29:53.067988 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:29:53.067998 | orchestrator |
2026-02-05 03:29:53.068007 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-05 03:29:53.068017 | orchestrator | Thursday 05 February 2026 03:29:42 +0000 (0:00:01.279) 0:01:10.714 *****
2026-02-05 03:29:53.068027 | orchestrator | ok: [testbed-manager]
2026-02-05 03:29:53.068036 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:29:53.068046 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:29:53.068055 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:29:53.068065 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:29:53.068074 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:29:53.068084 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:29:53.068093 | orchestrator |
2026-02-05 03:29:53.068103 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-05 03:29:53.068113 | orchestrator | Thursday 05 February 2026 03:29:44 +0000 (0:00:01.254) 0:01:11.969 *****
2026-02-05 03:29:53.068122 | orchestrator | ok: [testbed-manager]
2026-02-05 03:29:53.068132 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:29:53.068141 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:29:53.068151 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:29:53.068160 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:29:53.068170 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:29:53.068179 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:29:53.068189 | orchestrator |
2026-02-05 03:29:53.068198 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-05 03:29:53.068213 | orchestrator | Thursday 05 February 2026 03:29:45 +0000 (0:00:01.682) 0:01:13.651 *****
2026-02-05 03:29:53.068223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-05 03:29:53.068234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 03:29:53.068244 | orchestrator |
2026-02-05 03:29:53.068254 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-05 03:29:53.068264 | orchestrator | Thursday 05 February 2026 03:29:47 +0000 (0:00:01.352) 0:01:15.003 *****
2026-02-05 03:29:53.068273 | orchestrator | changed: [testbed-manager]
2026-02-05 03:29:53.068283 | orchestrator |
2026-02-05 03:29:53.068293 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-05 03:29:53.068302 | orchestrator | Thursday 05 February 2026 03:29:49 +0000 (0:00:02.057) 0:01:17.061 *****
2026-02-05 03:29:53.068312 | orchestrator | changed: [testbed-manager]
2026-02-05 03:29:53.068322 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:29:53.068332 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:29:53.068347 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:29:53.068357 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:29:53.068366 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:29:53.068376 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:29:53.068385 | orchestrator |
2026-02-05 03:29:53.068395 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:29:53.068405 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.068416 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.068426 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.068436 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.068452 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.360445 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.360534 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 03:29:53.360541 | orchestrator |
2026-02-05 03:29:53.360546 | orchestrator |
2026-02-05 03:29:53.360551 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:29:53.360557 | orchestrator | Thursday 05 February 2026 03:29:53 +0000 (0:00:03.866) 0:01:20.927 *****
2026-02-05 03:29:53.360561 | orchestrator | ===============================================================================
2026-02-05 03:29:53.360565 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.82s
2026-02-05 03:29:53.360569 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.41s
2026-02-05 03:29:53.360573 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.87s
2026-02-05 03:29:53.360576 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.40s
2026-02-05 03:29:53.360580 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.19s
2026-02-05 03:29:53.360584 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.06s
2026-02-05 03:29:53.360587 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.81s
2026-02-05 03:29:53.360591 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.68s
2026-02-05 03:29:53.360595 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.56s
2026-02-05 03:29:53.360599 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.53s
2026-02-05 03:29:53.360602 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s
2026-02-05 03:29:53.360606 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.30s
2026-02-05 03:29:53.360610 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.28s
2026-02-05 03:29:53.360614 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.25s
2026-02-05 03:29:53.360617 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.24s
2026-02-05 03:29:53.360622 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s
2026-02-05 03:29:55.447082 | orchestrator | 2026-02-05 03:29:55 | INFO  | Task e31efb2e-3a62-4f53-8b4a-7a0dd40c1525 (prometheus) was prepared for execution.
2026-02-05 03:29:55.447175 | orchestrator | 2026-02-05 03:29:55 | INFO  | It takes a moment until task e31efb2e-3a62-4f53-8b4a-7a0dd40c1525 (prometheus) has been started and output is visible here.
2026-02-05 03:30:04.857368 | orchestrator |
2026-02-05 03:30:04.857468 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:30:04.857481 | orchestrator |
2026-02-05 03:30:04.857491 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:30:04.857499 | orchestrator | Thursday 05 February 2026 03:29:59 +0000 (0:00:00.280) 0:00:00.280 *****
2026-02-05 03:30:04.857506 | orchestrator | ok: [testbed-manager]
2026-02-05 03:30:04.857515 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:30:04.857522 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:30:04.857530 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:30:04.857538 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:30:04.857545 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:30:04.857552 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:30:04.857560 | orchestrator |
2026-02-05 03:30:04.857566 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:30:04.857573 | orchestrator | Thursday 05 February 2026 03:30:00 +0000 (0:00:00.992) 0:00:01.273 *****
2026-02-05 03:30:04.857581 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857589 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857595 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857602 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857609 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857617 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857624 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-05 03:30:04.857631 | orchestrator |
2026-02-05 03:30:04.857638 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-05 03:30:04.857646 | orchestrator |
2026-02-05 03:30:04.857653 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-05 03:30:04.857660 | orchestrator | Thursday 05 February 2026 03:30:01 +0000 (0:00:00.931) 0:00:02.205 *****
2026-02-05 03:30:04.857669 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 03:30:04.857679 | orchestrator |
2026-02-05 03:30:04.857686 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-05 03:30:04.857694 | orchestrator | Thursday 05 February 2026 03:30:03 +0000 (0:00:01.417) 0:00:03.623 *****
2026-02-05 03:30:04.857704 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 03:30:04.857717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:04.857725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:04.857753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:04.857782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:04.857791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:04.857798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:04.857806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:04.857814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:04.857823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:04.857970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:04.858054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:30:05.799880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:05.799964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.799978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:05.799995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 03:30:05.800014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:05.800091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800135 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:05.800145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:05.800209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:10.685688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:10.685766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:10.685799 | orchestrator |
2026-02-05 03:30:10.685806 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-05 03:30:10.685813 | orchestrator | Thursday 05 February 2026 03:30:05 +0000 (0:00:02.732) 0:00:06.355 *****
2026-02-05 03:30:10.685817 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 03:30:10.685823 | orchestrator |
2026-02-05 03:30:10.685827 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-02-05 03:30:10.685889 | orchestrator | Thursday 05 February 2026 03:30:07 +0000 (0:00:01.590) 0:00:07.946 *****
2026-02-05 03:30:10.685895 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 03:30:10.685919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:10.685924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:10.685939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:10.685956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:10.685960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:10.685964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-02-05 03:30:10.685968 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:10.685990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:10.685994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:10.685998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:10.686006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:10.686052 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.956919 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:12.957155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:12.957167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:12.957179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957308 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 03:30:12.957338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:30:12.957393 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:12.957405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:12.957426 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:14.252573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:14.252706 | orchestrator | 2026-02-05 03:30:14.252730 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-05 03:30:14.252749 | orchestrator | Thursday 05 February 2026 03:30:12 +0000 (0:00:05.566) 0:00:13.512 ***** 2026-02-05 03:30:14.252769 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 03:30:14.252789 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.252808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.252932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 03:30:14.252970 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.252993 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:30:14.253006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.253017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.253028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.253040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.253050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.253061 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:30:14.253077 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.253090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.253119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.458216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.458333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.458360 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:30:14.458380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.458399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 
03:30:14.458418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.458456 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:30:14.458474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.458518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.458559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.458578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.458598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:14.458617 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:30:14.458635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.458655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.458680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 03:30:14.458707 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:30:14.458724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:14.458753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:15.358231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:15.358361 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:30:15.358387 | orchestrator |
2026-02-05 03:30:15.358408 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-05 03:30:15.358428 | orchestrator | Thursday 05 February 2026 03:30:14 +0000 (0:00:01.515) 0:00:15.027 *****
2026-02-05 03:30:15.358449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 03:30:15.358470 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:15.358491 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:15.358569 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 03:30:15.358609 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:15.358623 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:30:15.358635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:15.358647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:15.358661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:15.358674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:15.358687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:15.358714 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:15.358729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:15.358750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:16.586896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:16.587007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:16.587024 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:30:16.587038 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:30:16.587050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:16.587062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:16.587122 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:16.587136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:16.587147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 03:30:16.587158 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:30:16.587232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:16.587265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:16.587284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 03:30:16.587302 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:30:16.587321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:16.587352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 03:30:16.587380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 03:30:16.587399 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:30:16.587419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 03:30:16.587449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:20.199175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:20.199254 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:30:20.199263 | orchestrator |
2026-02-05 03:30:20.199269 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-05 03:30:20.199278 | orchestrator | Thursday 05 February 2026 03:30:16 +0000 (0:00:02.108) 0:00:17.136 *****
2026-02-05 03:30:20.199286 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 03:30:20.199323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199364 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199390 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:30:20.199394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:20.199403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:20.199411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:30:20.199416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:20.199422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:20.199431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.581696 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.581807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:22.581897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:22.581922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:22.581959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.581972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.581983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.582070 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 03:30:22.582121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.582134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.582146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 03:30:22.582164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:22.582191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:22.582203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:22.582226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 03:30:26.368281 | orchestrator |
2026-02-05 03:30:26.368410 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-05 03:30:26.368431 | orchestrator | Thursday 05 February 2026 03:30:22 +0000 (0:00:05.996) 0:00:23.133 *****
2026-02-05 03:30:26.368446 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 03:30:26.368462 | orchestrator |
2026-02-05 03:30:26.368477 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-05 03:30:26.368491 | orchestrator | Thursday 05 February 2026 03:30:23 +0000 (0:00:00.875) 0:00:24.008 *****
2026-02-05 03:30:26.368504 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368544 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368560 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368573 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368587 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368634 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368650 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1315182, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368664 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368683 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368692 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368701 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368709 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:26.368730 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.099707 |
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.099913 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.099965 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.099983 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.099998 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100013 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100054 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1315230, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9671152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100169 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100210 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100223 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100236 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100272 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:28.100294 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285739 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285817 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285892 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285902 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285925 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime':
1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285931 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285937 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285957 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285963 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285973 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285979 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285990 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.285996 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.286002 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:29.286055 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:30.661890 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:30.661992 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1315170, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.961704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:30.662004 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:30.662080 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:30.662089 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:30.662097 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth':
False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662105 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662151 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662166 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662180 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662188 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662195 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662203 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662211 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:30.662225 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123254 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123354 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123364 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1315213, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.965612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 03:30:32.123371 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123379 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123386 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123393 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123422 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123435 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123442 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123448 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123455 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123461 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123470 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:32.123491 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.401668 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.401814 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.401876 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.401898 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.401911 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.401923 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1315163, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9602802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 03:30:33.401974 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402009 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402085 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402107 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402128 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402149 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402171 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402216 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:33.402252 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:34.621825 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:34.621928 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:34.621939 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 03:30:34.621946 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.621953 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.621992 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622000 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622055 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622064 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1315186, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.962815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622071 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622078 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622095 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622105 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:30:34.622129 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622141 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:34.622158 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226538 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226652 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:30:41.226661 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226668 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226692 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226711 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226716 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:30:41.226721 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226737 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226742 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:30:41.226748 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226755 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:30:41.226762 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226776 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1315209, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226783 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226790 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:30:41.226800 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1315188, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.963102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226807 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1315178, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9621322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:30:41.226820 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315228, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9669182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.409930 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315157, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9596102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410108 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1315260, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410149 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1315225, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9666915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410170 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1315166, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9604335, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410208 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1315161, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9599676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410223 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1315205, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9647286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410237 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1315194, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.964067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410269 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1315253, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.971115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 03:31:08.410284 | orchestrator |
2026-02-05 03:31:08.410310 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-05 03:31:08.410326 | orchestrator | Thursday 05 February 2026 03:30:48 +0000 (0:00:25.202) 0:00:49.210 *****
2026-02-05 03:31:08.410340 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 03:31:08.410356 | orchestrator |
2026-02-05 03:31:08.410370 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-05 03:31:08.410384 | orchestrator | Thursday 05 February 2026 03:30:49 +0000 (0:00:00.811) 0:00:50.022 *****
2026-02-05 03:31:08.410400 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.410480 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.410553 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.410681 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.410756 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.410950 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.411020 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-05 03:31:08.411090 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 03:31:08.411104 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 03:31:08.411119 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 03:31:08.411134 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 03:31:08.411163 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 03:31:08.411178 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 03:31:08.411193 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 03:31:08.411208 | orchestrator |
2026-02-05 03:31:08.411222 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-05 03:31:08.411237 | orchestrator | Thursday 05 February 2026 03:30:51 +0000 (0:00:01.798) 0:00:51.821 *****
2026-02-05 03:31:08.411252 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:08.411268 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:08.411283 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:08.411295 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:08.411308 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:08.411322 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:08.411352 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:25.525570 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.525749 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:25.525770 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.525783 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:25.525795 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.525806 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-05 03:31:25.525818 | orchestrator |
2026-02-05 03:31:25.525866 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-05 03:31:25.525881 | orchestrator | Thursday 05 February 2026 03:31:08 +0000 (0:00:17.151) 0:01:08.972 *****
2026-02-05 03:31:25.525893 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.525904 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.525916 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.525927 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.525938 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.525949 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.525960 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.525971 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.525985 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.525997 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.526011 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.526089 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.526103 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-05 03:31:25.526116 | orchestrator |
2026-02-05 03:31:25.526134 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-05 03:31:25.526157 | orchestrator | Thursday 05 February 2026 03:31:11 +0000 (0:00:02.818) 0:01:11.790 *****
2026-02-05 03:31:25.526178 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526200 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.526220 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526241 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.526297 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526317 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.526338 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526360 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.526399 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526421 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526441 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.526461 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-05 03:31:25.526477 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.526488 | orchestrator |
2026-02-05 03:31:25.526500 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-05 03:31:25.526511 | orchestrator | Thursday 05 February 2026 03:31:13 +0000 (0:00:01.858) 0:01:13.649 *****
2026-02-05 03:31:25.526522 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 03:31:25.526535 | orchestrator |
2026-02-05 03:31:25.526554 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-05 03:31:25.526574 | orchestrator | Thursday 05 February 2026 03:31:13 +0000 (0:00:00.775) 0:01:14.424 *****
2026-02-05 03:31:25.526591 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:31:25.526610 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.526628 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.526646 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.526662 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.526678 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.526695 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.526713 | orchestrator |
2026-02-05 03:31:25.526732 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-05 03:31:25.526750 | orchestrator | Thursday 05 February 2026 03:31:14 +0000 (0:00:00.807) 0:01:15.231 *****
2026-02-05 03:31:25.526768 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:31:25.526785 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.526802 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.526819 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.526865 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:31:25.526884 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:31:25.526902 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:31:25.526919 | orchestrator |
2026-02-05 03:31:25.526937 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-05 03:31:25.527015 | orchestrator | Thursday 05 February 2026 03:31:16 +0000 (0:00:02.151) 0:01:17.383 *****
2026-02-05 03:31:25.527039 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527054 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527065 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:31:25.527076 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527087 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527098 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.527108 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.527119 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.527130 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527140 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.527166 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527178 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.527189 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-05 03:31:25.527199 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.527210 | orchestrator |
2026-02-05 03:31:25.527221 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-05 03:31:25.527232 | orchestrator | Thursday 05 February 2026 03:31:18 +0000 (0:00:01.543) 0:01:18.927 *****
2026-02-05 03:31:25.527243 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527254 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527265 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.527276 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.527289 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527308 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.527326 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527344 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.527362 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527378 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.527395 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527412 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.527429 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-05 03:31:25.527448 | orchestrator |
2026-02-05 03:31:25.527465 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-05 03:31:25.527483 | orchestrator | Thursday 05 February 2026 03:31:19 +0000 (0:00:01.580) 0:01:20.507 *****
2026-02-05 03:31:25.527513 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-02-05 03:31:25.527610 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 03:31:25.527627 | orchestrator |
2026-02-05 03:31:25.527644 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-02-05 03:31:25.527661 | orchestrator | Thursday 05 February 2026 03:31:21 +0000 (0:00:01.185) 0:01:21.692 *****
2026-02-05 03:31:25.527678 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:31:25.527697 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.527717 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.527736 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.527755 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.527774 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.527790 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.527801 | orchestrator |
2026-02-05 03:31:25.527812 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-05 03:31:25.527851 | orchestrator | Thursday 05 February 2026 03:31:22 +0000 (0:00:00.969) 0:01:22.661 *****
2026-02-05 03:31:25.527863 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:31:25.527875 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:31:25.527885 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:31:25.527896 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:31:25.527919 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:31:25.527930 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:31:25.527941 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:31:25.527952 | orchestrator |
2026-02-05 03:31:25.527963 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-02-05 03:31:25.527974 | orchestrator | Thursday 05 February 2026 03:31:23 +0000 (0:00:00.942) 0:01:23.604 *****
2026-02-05 03:31:25.528003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:31:27.349030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:31:27.349143 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 03:31:27.349160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 03:31:27.349192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value':
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:31:27.349204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:31:27.349216 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:31:27.349252 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 03:31:27.349288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:27.349318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:27.349341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:27.349361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:27.349394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:27.349421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:27.349455 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:27.349489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:29.391549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:29.391657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 03:31:29.391674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:29.391686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 03:31:29.391717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 03:31:29.391756 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 03:31:29.391788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:29.391802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:29.391813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 03:31:29.391877 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:29.391896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:29.391908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 
03:31:29.391928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 03:31:29.391940 | orchestrator | 2026-02-05 03:31:29.391954 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-05 03:31:29.391966 | orchestrator | Thursday 05 February 2026 03:31:27 +0000 (0:00:04.311) 0:01:27.915 ***** 2026-02-05 03:31:29.391978 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 03:31:29.391990 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:31:29.392002 | orchestrator | 2026-02-05 03:31:29.392012 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 03:31:29.392024 | orchestrator | Thursday 05 February 2026 03:31:28 +0000 (0:00:01.309) 0:01:29.225 ***** 2026-02-05 03:31:29.392035 | orchestrator | 2026-02-05 03:31:29.392046 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 03:31:29.392057 | orchestrator | Thursday 05 February 2026 03:31:28 +0000 (0:00:00.250) 0:01:29.476 ***** 2026-02-05 03:31:29.392067 | orchestrator | 2026-02-05 03:31:29.392079 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 03:31:29.392092 | orchestrator | Thursday 05 February 2026 03:31:28 +0000 (0:00:00.073) 0:01:29.550 ***** 2026-02-05 03:31:29.392105 | orchestrator | 2026-02-05 03:31:29.392117 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-02-05 03:31:29.392139 | orchestrator | Thursday 05 February 2026 03:31:29 +0000 (0:00:00.072) 0:01:29.622 ***** 2026-02-05 03:33:19.470674 | orchestrator | 2026-02-05 03:33:19.470764 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 03:33:19.470775 | orchestrator | Thursday 05 February 2026 03:31:29 +0000 (0:00:00.076) 0:01:29.698 ***** 2026-02-05 03:33:19.470782 | orchestrator | 2026-02-05 03:33:19.470789 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 03:33:19.470860 | orchestrator | Thursday 05 February 2026 03:31:29 +0000 (0:00:00.068) 0:01:29.767 ***** 2026-02-05 03:33:19.470868 | orchestrator | 2026-02-05 03:33:19.470874 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 03:33:19.470881 | orchestrator | Thursday 05 February 2026 03:31:29 +0000 (0:00:00.067) 0:01:29.834 ***** 2026-02-05 03:33:19.470887 | orchestrator | 2026-02-05 03:33:19.470894 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-05 03:33:19.470901 | orchestrator | Thursday 05 February 2026 03:31:29 +0000 (0:00:00.108) 0:01:29.943 ***** 2026-02-05 03:33:19.470907 | orchestrator | changed: [testbed-manager] 2026-02-05 03:33:19.470914 | orchestrator | 2026-02-05 03:33:19.470920 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-05 03:33:19.470926 | orchestrator | Thursday 05 February 2026 03:31:51 +0000 (0:00:21.710) 0:01:51.654 ***** 2026-02-05 03:33:19.470932 | orchestrator | changed: [testbed-node-4] 2026-02-05 03:33:19.470939 | orchestrator | changed: [testbed-node-5] 2026-02-05 03:33:19.470945 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:33:19.470952 | orchestrator | changed: [testbed-manager] 2026-02-05 03:33:19.470958 | orchestrator | changed: 
[testbed-node-2] 2026-02-05 03:33:19.470986 | orchestrator | changed: [testbed-node-3] 2026-02-05 03:33:19.470993 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:33:19.470999 | orchestrator | 2026-02-05 03:33:19.471005 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-05 03:33:19.471011 | orchestrator | Thursday 05 February 2026 03:32:05 +0000 (0:00:14.112) 0:02:05.766 ***** 2026-02-05 03:33:19.471018 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:33:19.471024 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:33:19.471030 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:33:19.471036 | orchestrator | 2026-02-05 03:33:19.471042 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-05 03:33:19.471049 | orchestrator | Thursday 05 February 2026 03:32:15 +0000 (0:00:10.523) 0:02:16.290 ***** 2026-02-05 03:33:19.471056 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:33:19.471062 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:33:19.471068 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:33:19.471074 | orchestrator | 2026-02-05 03:33:19.471080 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-05 03:33:19.471086 | orchestrator | Thursday 05 February 2026 03:32:26 +0000 (0:00:10.606) 0:02:26.896 ***** 2026-02-05 03:33:19.471093 | orchestrator | changed: [testbed-node-3] 2026-02-05 03:33:19.471099 | orchestrator | changed: [testbed-node-5] 2026-02-05 03:33:19.471105 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:33:19.471111 | orchestrator | changed: [testbed-node-4] 2026-02-05 03:33:19.471117 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:33:19.471123 | orchestrator | changed: [testbed-manager] 2026-02-05 03:33:19.471129 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:33:19.471135 | orchestrator | 2026-02-05 03:33:19.471154 
| orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-05 03:33:19.471161 | orchestrator | Thursday 05 February 2026 03:32:39 +0000 (0:00:13.190) 0:02:40.086 *****
2026-02-05 03:33:19.471167 | orchestrator | changed: [testbed-manager]
2026-02-05 03:33:19.471173 | orchestrator |
2026-02-05 03:33:19.471179 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-05 03:33:19.471186 | orchestrator | Thursday 05 February 2026 03:32:47 +0000 (0:00:08.345) 0:02:48.432 *****
2026-02-05 03:33:19.471192 | orchestrator | changed: [testbed-node-2]
2026-02-05 03:33:19.471198 | orchestrator | changed: [testbed-node-1]
2026-02-05 03:33:19.471204 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:33:19.471210 | orchestrator |
2026-02-05 03:33:19.471216 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-05 03:33:19.471222 | orchestrator | Thursday 05 February 2026 03:32:58 +0000 (0:00:10.412) 0:02:58.845 *****
2026-02-05 03:33:19.471228 | orchestrator | changed: [testbed-manager]
2026-02-05 03:33:19.471234 | orchestrator |
2026-02-05 03:33:19.471242 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-05 03:33:19.471249 | orchestrator | Thursday 05 February 2026 03:33:08 +0000 (0:00:10.473) 0:03:09.318 *****
2026-02-05 03:33:19.471257 | orchestrator | changed: [testbed-node-3]
2026-02-05 03:33:19.471264 | orchestrator | changed: [testbed-node-5]
2026-02-05 03:33:19.471271 | orchestrator | changed: [testbed-node-4]
2026-02-05 03:33:19.471279 | orchestrator |
2026-02-05 03:33:19.471286 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:33:19.471294 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-05 03:33:19.471303 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-05 03:33:19.471310 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-05 03:33:19.471318 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-05 03:33:19.471330 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-05 03:33:19.471351 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-05 03:33:19.471358 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-05 03:33:19.471366 | orchestrator |
2026-02-05 03:33:19.471373 | orchestrator |
2026-02-05 03:33:19.471380 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:33:19.471388 | orchestrator | Thursday 05 February 2026 03:33:18 +0000 (0:00:10.144) 0:03:19.463 *****
2026-02-05 03:33:19.471395 | orchestrator | ===============================================================================
2026-02-05 03:33:19.471403 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.20s
2026-02-05 03:33:19.471410 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.71s
2026-02-05 03:33:19.471418 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.15s
2026-02-05 03:33:19.471425 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.11s
2026-02-05 03:33:19.471432 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.19s
2026-02-05 03:33:19.471440 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.61s
2026-02-05 03:33:19.471447 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.52s
2026-02-05 03:33:19.471454 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.47s
2026-02-05 03:33:19.471461 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.41s
2026-02-05 03:33:19.471469 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.14s
2026-02-05 03:33:19.471476 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.35s
2026-02-05 03:33:19.471483 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.00s
2026-02-05 03:33:19.471491 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.57s
2026-02-05 03:33:19.471499 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.31s
2026-02-05 03:33:19.471506 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.82s
2026-02-05 03:33:19.471513 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.73s
2026-02-05 03:33:19.471521 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.15s
2026-02-05 03:33:19.471528 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.11s
2026-02-05 03:33:19.471536 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.86s
2026-02-05 03:33:19.471543 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.80s
2026-02-05 03:33:21.942634 | orchestrator | 2026-02-05 03:33:21 | INFO  | Task 7a71dc4f-722c-43ad-b07b-452e8d4ab612 (grafana) was prepared for execution.
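The TASKS RECAP above lists per-task wall-clock durations in a fixed text layout. A minimal sketch of pulling those timings out of such a recap for further analysis (the function name and regex are illustrative, not part of any Zuul or kolla-ansible tooling):

```python
import re

# Matches kolla-ansible "TASKS RECAP" lines such as:
#   prometheus : Restart prometheus-server container ----------------------- 21.71s
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_tasks_recap(lines):
    """Return a {task name: duration in seconds} dict from recap lines."""
    durations = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            durations[m.group("task")] = float(m.group("secs"))
    return durations
```

Feeding the recap lines from this log into such a parser would show, for example, that the handler restarts dominate the play's runtime.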
2026-02-05 03:33:21.944687 | orchestrator | 2026-02-05 03:33:21 | INFO  | It takes a moment until task 7a71dc4f-722c-43ad-b07b-452e8d4ab612 (grafana) has been started and output is visible here.
2026-02-05 03:33:32.098127 | orchestrator |
2026-02-05 03:33:32.098245 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:33:32.098262 | orchestrator |
2026-02-05 03:33:32.098275 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:33:32.098287 | orchestrator | Thursday 05 February 2026 03:33:26 +0000 (0:00:00.279) 0:00:00.279 *****
2026-02-05 03:33:32.098298 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:33:32.098336 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:33:32.098348 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:33:32.098359 | orchestrator |
2026-02-05 03:33:32.098370 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:33:32.098386 | orchestrator | Thursday 05 February 2026 03:33:26 +0000 (0:00:00.331) 0:00:00.610 *****
2026-02-05 03:33:32.098405 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-05 03:33:32.098432 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-05 03:33:32.098456 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-05 03:33:32.098475 | orchestrator |
2026-02-05 03:33:32.098493 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-05 03:33:32.098512 | orchestrator |
2026-02-05 03:33:32.098530 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-05 03:33:32.098549 | orchestrator | Thursday 05 February 2026 03:33:26 +0000 (0:00:00.461) 0:00:01.072 *****
2026-02-05 03:33:32.098568 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:33:32.098588 | orchestrator |
2026-02-05 03:33:32.098608 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-05 03:33:32.098628 | orchestrator | Thursday 05 February 2026 03:33:27 +0000 (0:00:00.602) 0:00:01.674 *****
2026-02-05 03:33:32.098680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.098709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.098732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.098750 | orchestrator |
2026-02-05 03:33:32.098769 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-05 03:33:32.098789 | orchestrator | Thursday 05 February 2026 03:33:28 +0000 (0:00:00.699) 0:00:02.373 *****
2026-02-05 03:33:32.098983 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-05 03:33:32.099004 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-05 03:33:32.099018 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 03:33:32.099046 | orchestrator |
2026-02-05 03:33:32.099057 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-05 03:33:32.099068 | orchestrator | Thursday 05 February 2026 03:33:29 +0000 (0:00:01.219) 0:00:03.593 *****
2026-02-05 03:33:32.099095 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:33:32.099107 | orchestrator |
2026-02-05 03:33:32.099118 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-05 03:33:32.099129 | orchestrator | Thursday 05 February 2026 03:33:30 +0000 (0:00:00.568) 0:00:04.161 *****
2026-02-05 03:33:32.099163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.099176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.099187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.099199 | orchestrator |
2026-02-05 03:33:32.099210 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-02-05 03:33:32.099221 | orchestrator | Thursday 05 February 2026 03:33:31 +0000 (0:00:01.396) 0:00:05.558 *****
2026-02-05 03:33:32.099232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.099244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:32.099263 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:33:32.099274 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:33:32.099300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578411 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:33:38.578563 | orchestrator |
2026-02-05 03:33:38.578594 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-02-05 03:33:38.578616 | orchestrator | Thursday 05 February 2026 03:33:32 +0000 (0:00:00.635) 0:00:06.194 *****
2026-02-05 03:33:38.578639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578664 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:33:38.578684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578701 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:33:38.578713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578725 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:33:38.578736 | orchestrator |
2026-02-05 03:33:38.578747 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-02-05 03:33:38.578758 | orchestrator | Thursday 05 February 2026 03:33:32 +0000 (0:00:00.640) 0:00:06.834 *****
2026-02-05 03:33:38.578840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578932 | orchestrator |
2026-02-05 03:33:38.578943 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-02-05 03:33:38.578954 | orchestrator | Thursday 05 February 2026 03:33:33 +0000 (0:00:01.219) 0:00:08.054 *****
2026-02-05 03:33:38.578966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.578989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-05 03:33:38.579009 | orchestrator |
2026-02-05 03:33:38.579020 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-05 03:33:38.579031 | orchestrator | Thursday 05 February 2026 03:33:35 +0000 (0:00:01.618) 0:00:09.672 *****
2026-02-05 03:33:38.579042 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:33:38.579053 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:33:38.579064 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:33:38.579080 | orchestrator |
2026-02-05 03:33:38.579099 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-05 03:33:38.579118 | orchestrator | Thursday 05 February 2026 03:33:35 +0000 (0:00:00.342) 0:00:10.014 *****
2026-02-05 03:33:38.579136 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-05 03:33:38.579155 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-05 03:33:38.579174 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-05 03:33:38.579194 | orchestrator |
2026-02-05 03:33:38.579213 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-05 03:33:38.579231 | orchestrator | Thursday 05 February 2026 03:33:37 +0000 (0:00:01.232) 0:00:11.246 *****
2026-02-05 03:33:38.579256 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-05 03:33:38.579274 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-05 03:33:38.579291 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-05 03:33:38.579308 | orchestrator |
2026-02-05 03:33:38.579326 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-05 03:33:38.579434 | orchestrator | Thursday 05 February 2026 03:33:38 +0000 (0:00:01.422) 0:00:12.669 *****
2026-02-05 03:33:45.095726 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 03:33:45.095893 | orchestrator |
2026-02-05 03:33:45.095912 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-05 03:33:45.095929 | orchestrator | Thursday 05 February 2026 03:33:39 +0000 (0:00:00.746) 0:00:13.415 *****
2026-02-05 03:33:45.095945 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-05 03:33:45.095961 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-05 03:33:45.095976 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:33:45.095992 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:33:45.096008 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:33:45.096023 | orchestrator |
2026-02-05 03:33:45.096039 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-05 03:33:45.096054 | orchestrator | Thursday 05 February 2026 03:33:40 +0000 (0:00:00.373) 0:00:14.211 *****
2026-02-05 03:33:45.096069 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:33:45.096079 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:33:45.096088 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:33:45.096097 | orchestrator |
2026-02-05 03:33:45.096106 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-05 03:33:45.096115 | orchestrator | Thursday 05 February 2026 03:33:40 +0000 (0:00:00.373) 0:00:14.584 *****
2026-02-05 03:33:45.096126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1314837, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9091134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1314837, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9091134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1314837, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9091134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1314914, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9216642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1314914, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9216642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1314914, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9216642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1314859, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9111145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1314859, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9111145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1314859, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9111145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1314917, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9231145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1314917, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9231145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:45.096325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1314917, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9231145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1314880, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9155543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1314880, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9155543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1314880, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9155543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1314904, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9201145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1314904, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9201145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1314904, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9201145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1314833, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9067085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1314833, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9067085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1314833, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9067085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1314851, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9099493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1314851, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9099493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1314851, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9099493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:49.024958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1314861, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9120927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:52.715364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1314861, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9120927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:52.715516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1314861, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9120927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:52.715540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1314894, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9181724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:52.715555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1314894, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9181724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 03:33:52.715588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True,
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1314894, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9181724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1314912, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9211144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1314912, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9211144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1314912, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9211144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1314855, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.910933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1314855, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.910933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1314855, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.910933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1314896, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.919969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:52.715720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1314896, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.919969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1314896, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.919969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1314884, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.917675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1314884, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.917675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1314884, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.917675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1314875, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9145403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1314875, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9145403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950985 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1314875, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9145403, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.950998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1314869, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9138474, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.951009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1314869, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9138474, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.951020 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1314869, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9138474, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.951041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1314895, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9185085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:33:56.951079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1314895, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9185085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-05 03:33:56.951115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1314895, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9185085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1314865, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9129314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1314865, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9129314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-05 03:34:00.768479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1314865, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9129314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1314909, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9209626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1314909, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9209626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1314909, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9209626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1315138, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9584165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1315138, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9584165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1315138, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9584165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314973, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9361453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314973, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9361453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1314973, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9361453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:00.768869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314957, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9271145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314957, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770255350.9271145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1315015, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9400456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1314957, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9271145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1315015, 'dev': 
146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9400456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314935, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9248686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1315015, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9400456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314935, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9248686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1315083, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9502919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1315083, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9502919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1314935, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9248686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1315018, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9458697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1315018, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9458697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:05.082634 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1315083, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9502919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1315093, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9502919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1315093, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9502919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1315018, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9458697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1315133, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9570522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1315133, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9570522, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1315093, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9502919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1315070, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.948829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1315070, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770255350.948829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1315133, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9570522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1315000, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9385984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1315000, 
'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9385984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1315070, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.948829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:08.719656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314965, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9311147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 82960, 'inode': 1314965, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9311147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1315000, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9385984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314994, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9371147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314994, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9371147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1314965, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9311147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314958, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9298918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314958, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9298918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1314994, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9371147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1315009, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9395273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1315009, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9395273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1314958, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9298918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1315114, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.956115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:12.881531 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1315114, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.956115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.786428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1315009, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9395273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1315104, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9536638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1315104, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9536638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1315114, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.956115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314940, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9258292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314940, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9258292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1315104, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9536638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314948, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770255350.9271145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314948, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9271145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1314940, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9258292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 70691, 'inode': 1315063, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9471147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1315063, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9471147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:34:16.787706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1314948, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9271145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:35:57.310398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1315098, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9521148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:35:57.310499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1315098, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9521148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:35:57.310510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1315063, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9471147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:35:57.310518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1315098, 'dev': 146, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770255350.9521148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 03:35:57.310525 | orchestrator | 2026-02-05 03:35:57.310532 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-05 03:35:57.310540 | orchestrator | Thursday 05 February 2026 03:34:18 +0000 (0:00:38.477) 0:00:53.061 ***** 2026-02-05 03:35:57.310546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 03:35:57.310581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 03:35:57.310588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 03:35:57.310594 | orchestrator | 2026-02-05 03:35:57.310604 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-05 03:35:57.310610 | orchestrator | Thursday 05 February 2026 03:34:19 +0000 (0:00:01.028) 0:00:54.090 ***** 2026-02-05 03:35:57.310616 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:35:57.310624 | orchestrator | 2026-02-05 03:35:57.310630 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-05 03:35:57.310635 | orchestrator | Thursday 05 February 2026 03:34:22 +0000 (0:00:02.451) 0:00:56.542 ***** 2026-02-05 03:35:57.310641 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:35:57.310647 | orchestrator | 2026-02-05 03:35:57.310653 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 03:35:57.310659 | orchestrator | Thursday 05 February 2026 03:34:24 +0000 (0:00:02.361) 0:00:58.903 ***** 
2026-02-05 03:35:57.310664 | orchestrator | 2026-02-05 03:35:57.310670 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 03:35:57.310676 | orchestrator | Thursday 05 February 2026 03:34:24 +0000 (0:00:00.080) 0:00:58.984 ***** 2026-02-05 03:35:57.310682 | orchestrator | 2026-02-05 03:35:57.310687 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 03:35:57.310693 | orchestrator | Thursday 05 February 2026 03:34:24 +0000 (0:00:00.073) 0:00:59.057 ***** 2026-02-05 03:35:57.310699 | orchestrator | 2026-02-05 03:35:57.310705 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-05 03:35:57.310711 | orchestrator | Thursday 05 February 2026 03:34:25 +0000 (0:00:00.077) 0:00:59.135 ***** 2026-02-05 03:35:57.310717 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:35:57.310723 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:35:57.310728 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:35:57.310734 | orchestrator | 2026-02-05 03:35:57.310740 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-05 03:35:57.310746 | orchestrator | Thursday 05 February 2026 03:34:27 +0000 (0:00:02.346) 0:01:01.481 ***** 2026-02-05 03:35:57.310756 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:35:57.310763 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:35:57.310768 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-05 03:35:57.310775 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-05 03:35:57.310822 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-02-05 03:35:57.310828 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-02-05 03:35:57.310834 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:35:57.310841 | orchestrator | 2026-02-05 03:35:57.310847 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-05 03:35:57.310853 | orchestrator | Thursday 05 February 2026 03:35:18 +0000 (0:00:51.432) 0:01:52.914 ***** 2026-02-05 03:35:57.310859 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:35:57.310865 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:35:57.310871 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:35:57.310876 | orchestrator | 2026-02-05 03:35:57.310882 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-05 03:35:57.310888 | orchestrator | Thursday 05 February 2026 03:35:52 +0000 (0:00:33.231) 0:02:26.145 ***** 2026-02-05 03:35:57.310894 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:35:57.310900 | orchestrator | 2026-02-05 03:35:57.310907 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-05 03:35:57.310913 | orchestrator | Thursday 05 February 2026 03:35:54 +0000 (0:00:02.260) 0:02:28.405 ***** 2026-02-05 03:35:57.310920 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:35:57.310927 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:35:57.310934 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:35:57.310940 | orchestrator | 2026-02-05 03:35:57.310947 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-05 03:35:57.310954 | orchestrator | Thursday 05 February 2026 03:35:54 +0000 (0:00:00.311) 0:02:28.717 ***** 2026-02-05 03:35:57.310962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-05 03:35:57.310976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-05 03:35:57.997251 | orchestrator | 2026-02-05 03:35:57.997367 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-05 03:35:57.997383 | orchestrator | Thursday 05 February 2026 03:35:57 +0000 (0:00:02.686) 0:02:31.403 ***** 2026-02-05 03:35:57.997405 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:35:57.998162 | orchestrator | 2026-02-05 03:35:57.998191 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:35:57.998206 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 03:35:57.998225 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 03:35:57.998264 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 03:35:57.998283 | orchestrator | 2026-02-05 03:35:57.998301 | orchestrator | 2026-02-05 03:35:57.998318 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:35:57.998358 | orchestrator | Thursday 05 February 2026 03:35:57 +0000 (0:00:00.293) 0:02:31.697 ***** 2026-02-05 03:35:57.998369 | orchestrator | =============================================================================== 2026-02-05 03:35:57.998378 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 51.43s 2026-02-05 03:35:57.998388 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.48s 2026-02-05 03:35:57.998398 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.23s 2026-02-05 03:35:57.998407 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.69s 2026-02-05 03:35:57.998417 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.45s 2026-02-05 03:35:57.998427 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.36s 2026-02-05 03:35:57.998436 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.35s 2026-02-05 03:35:57.998446 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s 2026-02-05 03:35:57.998455 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.62s 2026-02-05 03:35:57.998465 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.42s 2026-02-05 03:35:57.998474 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.40s 2026-02-05 03:35:57.998484 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.23s 2026-02-05 03:35:57.998494 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.22s 2026-02-05 03:35:57.998508 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.22s 2026-02-05 03:35:57.998525 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s 2026-02-05 03:35:57.998539 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.80s 2026-02-05 03:35:57.998554 | orchestrator | grafana : Find custom 
grafana dashboards -------------------------------- 0.75s 2026-02-05 03:35:57.998570 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.70s 2026-02-05 03:35:57.998584 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.64s 2026-02-05 03:35:57.998601 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.64s 2026-02-05 03:35:58.372979 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-05 03:35:58.379396 | orchestrator | + set -e 2026-02-05 03:35:58.379591 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 03:35:58.379613 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 03:35:58.379627 | orchestrator | ++ INTERACTIVE=false 2026-02-05 03:35:58.379638 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 03:35:58.379649 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 03:35:58.379660 | orchestrator | + source /opt/manager-vars.sh 2026-02-05 03:35:58.379670 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-05 03:35:58.379682 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-05 03:35:58.379705 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-05 03:35:58.379717 | orchestrator | ++ CEPH_VERSION=reef 2026-02-05 03:35:58.379729 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-05 03:35:58.379741 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-05 03:35:58.379752 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-05 03:35:58.379763 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-05 03:35:58.379774 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-05 03:35:58.379847 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-05 03:35:58.379858 | orchestrator | ++ export ARA=false 2026-02-05 03:35:58.379870 | orchestrator | ++ ARA=false 2026-02-05 03:35:58.379881 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-05 03:35:58.379892 | orchestrator | 
++ DEPLOY_MODE=manager 2026-02-05 03:35:58.379903 | orchestrator | ++ export TEMPEST=false 2026-02-05 03:35:58.379913 | orchestrator | ++ TEMPEST=false 2026-02-05 03:35:58.379924 | orchestrator | ++ export IS_ZUUL=true 2026-02-05 03:35:58.379935 | orchestrator | ++ IS_ZUUL=true 2026-02-05 03:35:58.379946 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-02-05 03:35:58.379957 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180 2026-02-05 03:35:58.379967 | orchestrator | ++ export EXTERNAL_API=false 2026-02-05 03:35:58.380009 | orchestrator | ++ EXTERNAL_API=false 2026-02-05 03:35:58.380020 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-05 03:35:58.380031 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-05 03:35:58.380041 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-05 03:35:58.380052 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-05 03:35:58.380063 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-05 03:35:58.380074 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-05 03:35:58.380518 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-05 03:35:58.437543 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-05 03:35:58.437625 | orchestrator | + osism apply clusterapi 2026-02-05 03:36:00.756144 | orchestrator | 2026-02-05 03:36:00 | INFO  | Task aa9c21aa-7971-41dd-afa5-5acd053b7313 (clusterapi) was prepared for execution. 2026-02-05 03:36:00.756233 | orchestrator | 2026-02-05 03:36:00 | INFO  | It takes a moment until task aa9c21aa-7971-41dd-afa5-5acd053b7313 (clusterapi) has been started and output is visible here. 
2026-02-05 03:37:06.693948 | orchestrator | 2026-02-05 03:37:06.694096 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-05 03:37:06.694116 | orchestrator | 2026-02-05 03:37:06.694125 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-05 03:37:06.694135 | orchestrator | Thursday 05 February 2026 03:36:05 +0000 (0:00:00.208) 0:00:00.208 ***** 2026-02-05 03:37:06.694145 | orchestrator | included: cert_manager for testbed-manager 2026-02-05 03:37:06.694155 | orchestrator | 2026-02-05 03:37:06.694163 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-05 03:37:06.694174 | orchestrator | Thursday 05 February 2026 03:36:05 +0000 (0:00:00.249) 0:00:00.458 ***** 2026-02-05 03:37:06.694183 | orchestrator | changed: [testbed-manager] 2026-02-05 03:37:06.694194 | orchestrator | 2026-02-05 03:37:06.694203 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-05 03:37:06.694213 | orchestrator | Thursday 05 February 2026 03:36:11 +0000 (0:00:05.435) 0:00:05.894 ***** 2026-02-05 03:37:06.694222 | orchestrator | changed: [testbed-manager] 2026-02-05 03:37:06.694231 | orchestrator | 2026-02-05 03:37:06.694241 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-05 03:37:06.694250 | orchestrator | 2026-02-05 03:37:06.694277 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-05 03:37:06.694288 | orchestrator | Thursday 05 February 2026 03:36:44 +0000 (0:00:33.641) 0:00:39.535 ***** 2026-02-05 03:37:06.694294 | orchestrator | ok: [testbed-manager] 2026-02-05 03:37:06.694299 | orchestrator | 2026-02-05 03:37:06.694314 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-05 03:37:06.694324 | orchestrator | Thursday 
05 February 2026 03:36:45 +0000 (0:00:01.128) 0:00:40.664 ***** 2026-02-05 03:37:06.694336 | orchestrator | ok: [testbed-manager] 2026-02-05 03:37:06.694351 | orchestrator | 2026-02-05 03:37:06.694360 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-05 03:37:06.694369 | orchestrator | Thursday 05 February 2026 03:36:45 +0000 (0:00:00.155) 0:00:40.819 ***** 2026-02-05 03:37:06.694378 | orchestrator | ok: [testbed-manager] 2026-02-05 03:37:06.694386 | orchestrator | 2026-02-05 03:37:06.694396 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-05 03:37:06.694406 | orchestrator | Thursday 05 February 2026 03:37:03 +0000 (0:00:17.896) 0:00:58.716 ***** 2026-02-05 03:37:06.694416 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:37:06.694426 | orchestrator | 2026-02-05 03:37:06.694436 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-05 03:37:06.694445 | orchestrator | Thursday 05 February 2026 03:37:04 +0000 (0:00:00.153) 0:00:58.869 ***** 2026-02-05 03:37:06.694450 | orchestrator | changed: [testbed-manager] 2026-02-05 03:37:06.694457 | orchestrator | 2026-02-05 03:37:06.694466 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:37:06.694481 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 03:37:06.694493 | orchestrator | 2026-02-05 03:37:06.694527 | orchestrator | 2026-02-05 03:37:06.694538 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:37:06.694548 | orchestrator | Thursday 05 February 2026 03:37:06 +0000 (0:00:02.289) 0:01:01.158 ***** 2026-02-05 03:37:06.694558 | orchestrator | =============================================================================== 2026-02-05 03:37:06.694567 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 33.64s 2026-02-05 03:37:06.694576 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.90s 2026-02-05 03:37:06.694586 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.44s 2026-02-05 03:37:06.694595 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.29s 2026-02-05 03:37:06.694603 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.13s 2026-02-05 03:37:06.694612 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s 2026-02-05 03:37:06.694621 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-02-05 03:37:06.694631 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.15s 2026-02-05 03:37:07.000147 | orchestrator | + osism apply magnum 2026-02-05 03:37:09.055952 | orchestrator | 2026-02-05 03:37:09 | INFO  | Task 3e2a2227-33b0-4e06-87db-535a4f32a267 (magnum) was prepared for execution. 2026-02-05 03:37:09.056081 | orchestrator | 2026-02-05 03:37:09 | INFO  | It takes a moment until task 3e2a2227-33b0-4e06-87db-535a4f32a267 (magnum) has been started and output is visible here. 
2026-02-05 03:37:53.703412 | orchestrator | 2026-02-05 03:37:53.703544 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 03:37:53.703564 | orchestrator | 2026-02-05 03:37:53.703579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 03:37:53.703594 | orchestrator | Thursday 05 February 2026 03:37:13 +0000 (0:00:00.305) 0:00:00.305 ***** 2026-02-05 03:37:53.703608 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:37:53.703624 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:37:53.703638 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:37:53.703652 | orchestrator | 2026-02-05 03:37:53.703665 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 03:37:53.703679 | orchestrator | Thursday 05 February 2026 03:37:13 +0000 (0:00:00.317) 0:00:00.622 ***** 2026-02-05 03:37:53.703693 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-05 03:37:53.703708 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-05 03:37:53.703721 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-05 03:37:53.703735 | orchestrator | 2026-02-05 03:37:53.703749 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-05 03:37:53.703851 | orchestrator | 2026-02-05 03:37:53.703865 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-05 03:37:53.703879 | orchestrator | Thursday 05 February 2026 03:37:14 +0000 (0:00:00.450) 0:00:01.073 ***** 2026-02-05 03:37:53.703893 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:37:53.703908 | orchestrator | 2026-02-05 03:37:53.703919 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-05 
03:37:53.703932 | orchestrator | Thursday 05 February 2026 03:37:14 +0000 (0:00:00.585) 0:00:01.658 ***** 2026-02-05 03:37:53.703947 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-05 03:37:53.703960 | orchestrator | 2026-02-05 03:37:53.703973 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-05 03:37:53.703986 | orchestrator | Thursday 05 February 2026 03:37:18 +0000 (0:00:03.763) 0:00:05.422 ***** 2026-02-05 03:37:53.704001 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-05 03:37:53.704017 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-05 03:37:53.704062 | orchestrator | 2026-02-05 03:37:53.704094 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-05 03:37:53.704110 | orchestrator | Thursday 05 February 2026 03:37:25 +0000 (0:00:06.831) 0:00:12.253 ***** 2026-02-05 03:37:53.704125 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 03:37:53.704139 | orchestrator | 2026-02-05 03:37:53.704154 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-05 03:37:53.704170 | orchestrator | Thursday 05 February 2026 03:37:28 +0000 (0:00:03.606) 0:00:15.859 ***** 2026-02-05 03:37:53.704185 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 03:37:53.704200 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-05 03:37:53.704214 | orchestrator | 2026-02-05 03:37:53.704229 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-05 03:37:53.704244 | orchestrator | Thursday 05 February 2026 03:37:33 +0000 (0:00:04.245) 0:00:20.105 ***** 2026-02-05 03:37:53.704259 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-05 03:37:53.704274 | orchestrator | 2026-02-05 03:37:53.704288 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-05 03:37:53.704303 | orchestrator | Thursday 05 February 2026 03:37:36 +0000 (0:00:03.381) 0:00:23.487 ***** 2026-02-05 03:37:53.704318 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-05 03:37:53.704331 | orchestrator | 2026-02-05 03:37:53.704345 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-05 03:37:53.704359 | orchestrator | Thursday 05 February 2026 03:37:40 +0000 (0:00:04.032) 0:00:27.520 ***** 2026-02-05 03:37:53.704372 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:37:53.704385 | orchestrator | 2026-02-05 03:37:53.704399 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-05 03:37:53.704413 | orchestrator | Thursday 05 February 2026 03:37:44 +0000 (0:00:03.653) 0:00:31.173 ***** 2026-02-05 03:37:53.704427 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:37:53.704440 | orchestrator | 2026-02-05 03:37:53.704454 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-05 03:37:53.704468 | orchestrator | Thursday 05 February 2026 03:37:48 +0000 (0:00:04.105) 0:00:35.279 ***** 2026-02-05 03:37:53.704481 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:37:53.704495 | orchestrator | 2026-02-05 03:37:53.704509 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-05 03:37:53.704522 | orchestrator | Thursday 05 February 2026 03:37:52 +0000 (0:00:03.647) 0:00:38.926 ***** 2026-02-05 03:37:53.704560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:37:53.704579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:37:53.704610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:37:53.704625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:37:53.704705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:37:53.704728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:01.197412 | orchestrator | 2026-02-05 03:38:01.197521 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-05 03:38:01.197538 | orchestrator | Thursday 05 February 2026 03:37:53 +0000 (0:00:01.663) 0:00:40.589 ***** 2026-02-05 03:38:01.197549 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:38:01.197584 | orchestrator | 2026-02-05 03:38:01.197595 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-05 03:38:01.197605 | orchestrator | Thursday 05 February 2026 03:37:53 +0000 (0:00:00.136) 0:00:40.726 ***** 2026-02-05 03:38:01.197615 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:38:01.197625 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:38:01.197635 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:38:01.197645 | orchestrator | 2026-02-05 03:38:01.197655 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-05 03:38:01.197665 | orchestrator | Thursday 05 February 2026 03:37:54 +0000 (0:00:00.321) 0:00:41.048 ***** 2026-02-05 03:38:01.197675 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 03:38:01.197685 | orchestrator | 2026-02-05 03:38:01.197695 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-05 03:38:01.197705 | orchestrator | Thursday 05 February 2026 03:37:54 +0000 (0:00:00.850) 0:00:41.899 ***** 2026-02-05 03:38:01.197732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:01.197748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:01.197785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:01.197826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:01.197857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:01.197882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:01.197900 | orchestrator | 2026-02-05 03:38:01.197917 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-05 03:38:01.197930 
| orchestrator | Thursday 05 February 2026 03:37:57 +0000 (0:00:02.421) 0:00:44.320 ***** 2026-02-05 03:38:01.197942 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:38:01.197954 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:38:01.197966 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:38:01.197977 | orchestrator | 2026-02-05 03:38:01.197989 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-05 03:38:01.198000 | orchestrator | Thursday 05 February 2026 03:37:57 +0000 (0:00:00.501) 0:00:44.821 ***** 2026-02-05 03:38:01.198052 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 03:38:01.198066 | orchestrator | 2026-02-05 03:38:01.198077 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-05 03:38:01.198090 | orchestrator | Thursday 05 February 2026 03:37:58 +0000 (0:00:00.628) 0:00:45.449 ***** 2026-02-05 03:38:01.198102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:01.198132 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:02.242272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:02.242393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:02.242413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:02.242426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:02.242462 | orchestrator | 2026-02-05 03:38:02.242476 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-05 03:38:02.242488 | orchestrator | Thursday 05 February 2026 03:38:01 +0000 (0:00:02.643) 0:00:48.093 ***** 2026-02-05 03:38:02.242518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:02.242531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:02.242543 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:38:02.242562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:02.242574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:02.242586 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:38:02.242597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:02.242626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:05.854321 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:38:05.854461 | orchestrator | 2026-02-05 
03:38:05.854480 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-05 03:38:05.854495 | orchestrator | Thursday 05 February 2026 03:38:02 +0000 (0:00:01.038) 0:00:49.131 ***** 2026-02-05 03:38:05.854542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:05.854563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:05.854580 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 03:38:05.854596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:05.854642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:05.854658 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:38:05.854698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:05.854724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:05.854742 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:38:05.854817 | orchestrator | 2026-02-05 03:38:05.854834 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-05 03:38:05.854844 | orchestrator | Thursday 05 February 2026 03:38:03 +0000 (0:00:00.984) 0:00:50.116 ***** 2026-02-05 03:38:05.854856 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:05.854876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:05.854897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:11.354929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:11.355083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:11.355103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:11.355185 | orchestrator | 2026-02-05 03:38:11.355202 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-05 03:38:11.355216 | orchestrator | Thursday 05 February 2026 03:38:05 +0000 (0:00:02.632) 0:00:52.748 ***** 2026-02-05 03:38:11.355228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:11.355262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:11.355281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:11.355293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:11.355313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:11.355325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:11.355343 | orchestrator | 2026-02-05 03:38:11.355362 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-05 03:38:11.355380 | orchestrator | Thursday 05 February 2026 03:38:10 +0000 (0:00:04.871) 0:00:57.620 ***** 2026-02-05 03:38:11.355411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:13.205683 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:13.205751 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:38:13.205789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:13.205810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:13.205814 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:38:13.205818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 03:38:13.205832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 03:38:13.205836 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:38:13.205840 | orchestrator | 2026-02-05 03:38:13.205845 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-05 03:38:13.205850 | orchestrator | Thursday 05 February 2026 03:38:11 +0000 (0:00:00.634) 0:00:58.255 ***** 2026-02-05 03:38:13.205858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:13.205866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:13.205870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 03:38:13.205875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:38:13.205884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 03:39:09.914172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-05 03:39:09.914329 | orchestrator | 2026-02-05 03:39:09.914353 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-05 03:39:09.914368 | orchestrator | Thursday 05 February 2026 03:38:13 +0000 (0:00:01.842) 0:01:00.098 ***** 2026-02-05 03:39:09.914381 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:39:09.914395 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:39:09.914409 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:39:09.914423 | orchestrator | 2026-02-05 03:39:09.914438 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-05 03:39:09.914452 | orchestrator | Thursday 05 February 2026 03:38:13 +0000 (0:00:00.512) 0:01:00.610 ***** 2026-02-05 03:39:09.914466 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:39:09.914479 | orchestrator | 2026-02-05 03:39:09.914492 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-05 03:39:09.914504 | orchestrator | Thursday 05 February 2026 03:38:16 +0000 (0:00:02.429) 0:01:03.040 ***** 2026-02-05 03:39:09.914513 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:39:09.914521 | orchestrator | 2026-02-05 03:39:09.914529 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-05 03:39:09.914537 | orchestrator | Thursday 05 February 2026 03:38:18 +0000 (0:00:02.538) 0:01:05.579 ***** 2026-02-05 03:39:09.914545 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:39:09.914553 | orchestrator | 2026-02-05 03:39:09.914560 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 03:39:09.914568 | orchestrator | Thursday 05 February 2026 03:38:35 +0000 (0:00:17.034) 0:01:22.613 ***** 2026-02-05 03:39:09.914576 | orchestrator | 2026-02-05 03:39:09.914584 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-05 03:39:09.914592 | orchestrator | Thursday 05 February 2026 03:38:35 +0000 (0:00:00.077) 0:01:22.691 ***** 2026-02-05 03:39:09.914600 | orchestrator | 2026-02-05 03:39:09.914608 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 03:39:09.914616 | orchestrator | Thursday 05 February 2026 03:38:35 +0000 (0:00:00.071) 0:01:22.763 ***** 2026-02-05 03:39:09.914623 | orchestrator | 2026-02-05 03:39:09.914631 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-05 03:39:09.914639 | orchestrator | Thursday 05 February 2026 03:38:35 +0000 (0:00:00.074) 0:01:22.838 ***** 2026-02-05 03:39:09.914647 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:39:09.914655 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:39:09.914663 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:39:09.914671 | orchestrator | 2026-02-05 03:39:09.914679 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-05 03:39:09.914686 | orchestrator | Thursday 05 February 2026 03:38:54 +0000 (0:00:18.578) 0:01:41.416 ***** 2026-02-05 03:39:09.914694 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:39:09.914702 | orchestrator | changed: [testbed-node-2] 2026-02-05 03:39:09.914710 | orchestrator | changed: [testbed-node-1] 2026-02-05 03:39:09.914718 | orchestrator | 2026-02-05 03:39:09.914726 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:39:09.914735 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 03:39:09.914744 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 03:39:09.914786 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-05 03:39:09.914795 | orchestrator | 2026-02-05 03:39:09.914803 | orchestrator | 2026-02-05 03:39:09.914811 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:39:09.914819 | orchestrator | Thursday 05 February 2026 03:39:09 +0000 (0:00:15.059) 0:01:56.476 ***** 2026-02-05 03:39:09.914827 | orchestrator | =============================================================================== 2026-02-05 03:39:09.914835 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.58s 2026-02-05 03:39:09.914843 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.03s 2026-02-05 03:39:09.914851 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.06s 2026-02-05 03:39:09.914859 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.83s 2026-02-05 03:39:09.914867 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.87s 2026-02-05 03:39:09.914875 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.25s 2026-02-05 03:39:09.914883 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.11s 2026-02-05 03:39:09.914908 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.03s 2026-02-05 03:39:09.914917 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.76s 2026-02-05 03:39:09.914925 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.65s 2026-02-05 03:39:09.914933 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.65s 2026-02-05 03:39:09.914948 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.61s 2026-02-05 03:39:09.914956 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.38s 2026-02-05 03:39:09.914964 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.64s 2026-02-05 03:39:09.914973 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2026-02-05 03:39:09.914980 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.54s 2026-02-05 03:39:09.914988 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.43s 2026-02-05 03:39:09.914996 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.42s 2026-02-05 03:39:09.915004 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.84s 2026-02-05 03:39:09.915012 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.66s 2026-02-05 03:39:10.633586 | orchestrator | ok: Runtime: 1:40:55.310635 2026-02-05 03:39:10.873936 | 2026-02-05 03:39:10.874075 | TASK [Deploy in a nutshell] 2026-02-05 03:39:11.407398 | orchestrator | skipping: Conditional result was False 2026-02-05 03:39:11.430290 | 2026-02-05 03:39:11.430454 | TASK [Bootstrap services] 2026-02-05 03:39:12.174244 | orchestrator | 2026-02-05 03:39:12.174395 | orchestrator | # BOOTSTRAP 2026-02-05 03:39:12.174408 | orchestrator | 2026-02-05 03:39:12.174416 | orchestrator | + set -e 2026-02-05 03:39:12.174422 | orchestrator | + echo 2026-02-05 03:39:12.174430 | orchestrator | + echo '# BOOTSTRAP' 2026-02-05 03:39:12.174440 | orchestrator | + echo 2026-02-05 03:39:12.174467 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-05 03:39:12.184280 | orchestrator | + set -e 2026-02-05 03:39:12.184349 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-05 03:39:14.343245 | orchestrator | 2026-02-05 03:39:14 | INFO  | It takes a 
moment until task d493034a-e566-44f0-a7c8-fc981dc69602 (flavor-manager) has been started and output is visible here. 2026-02-05 03:39:22.443453 | orchestrator | 2026-02-05 03:39:17 | INFO  | Flavor SCS-1L-1 created 2026-02-05 03:39:22.443620 | orchestrator | 2026-02-05 03:39:17 | INFO  | Flavor SCS-1L-1-5 created 2026-02-05 03:39:22.443647 | orchestrator | 2026-02-05 03:39:18 | INFO  | Flavor SCS-1V-2 created 2026-02-05 03:39:22.443659 | orchestrator | 2026-02-05 03:39:18 | INFO  | Flavor SCS-1V-2-5 created 2026-02-05 03:39:22.443672 | orchestrator | 2026-02-05 03:39:18 | INFO  | Flavor SCS-1V-4 created 2026-02-05 03:39:22.444676 | orchestrator | 2026-02-05 03:39:18 | INFO  | Flavor SCS-1V-4-10 created 2026-02-05 03:39:22.444727 | orchestrator | 2026-02-05 03:39:18 | INFO  | Flavor SCS-1V-8 created 2026-02-05 03:39:22.444800 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-1V-8-20 created 2026-02-05 03:39:22.444845 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-2V-4 created 2026-02-05 03:39:22.444864 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-2V-4-10 created 2026-02-05 03:39:22.444876 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-2V-8 created 2026-02-05 03:39:22.444887 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-2V-8-20 created 2026-02-05 03:39:22.444905 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-2V-16 created 2026-02-05 03:39:22.444925 | orchestrator | 2026-02-05 03:39:19 | INFO  | Flavor SCS-2V-16-50 created 2026-02-05 03:39:22.444943 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor SCS-4V-8 created 2026-02-05 03:39:22.444962 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor SCS-4V-8-20 created 2026-02-05 03:39:22.444980 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor SCS-4V-16 created 2026-02-05 03:39:22.444998 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor SCS-4V-16-50 created 2026-02-05 03:39:22.445016 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor 
SCS-4V-32 created 2026-02-05 03:39:22.445037 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor SCS-4V-32-100 created 2026-02-05 03:39:22.445055 | orchestrator | 2026-02-05 03:39:20 | INFO  | Flavor SCS-8V-16 created 2026-02-05 03:39:22.445074 | orchestrator | 2026-02-05 03:39:21 | INFO  | Flavor SCS-8V-16-50 created 2026-02-05 03:39:22.445094 | orchestrator | 2026-02-05 03:39:21 | INFO  | Flavor SCS-8V-32 created 2026-02-05 03:39:22.445113 | orchestrator | 2026-02-05 03:39:21 | INFO  | Flavor SCS-8V-32-100 created 2026-02-05 03:39:22.445133 | orchestrator | 2026-02-05 03:39:21 | INFO  | Flavor SCS-16V-32 created 2026-02-05 03:39:22.445149 | orchestrator | 2026-02-05 03:39:21 | INFO  | Flavor SCS-16V-32-100 created 2026-02-05 03:39:22.445160 | orchestrator | 2026-02-05 03:39:21 | INFO  | Flavor SCS-2V-4-20s created 2026-02-05 03:39:22.445171 | orchestrator | 2026-02-05 03:39:22 | INFO  | Flavor SCS-4V-8-50s created 2026-02-05 03:39:22.445182 | orchestrator | 2026-02-05 03:39:22 | INFO  | Flavor SCS-8V-32-100s created 2026-02-05 03:39:24.775117 | orchestrator | 2026-02-05 03:39:24 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-05 03:39:34.909850 | orchestrator | 2026-02-05 03:39:34 | INFO  | Task b510ab48-c2da-4785-9dac-890f01b3d438 (bootstrap-basic) was prepared for execution. 2026-02-05 03:39:34.909958 | orchestrator | 2026-02-05 03:39:34 | INFO  | It takes a moment until task b510ab48-c2da-4785-9dac-890f01b3d438 (bootstrap-basic) has been started and output is visible here. 
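The flavor names created above follow the SCS naming scheme, which encodes the resources directly in the name (e.g. `SCS-4V-16-50` is 4 vCPUs, 16 GiB RAM, 50 GB disk, and a trailing `s` as in `SCS-2V-4-20s` marks local SSD storage). A minimal sketch of a parser for the names seen in this log; `parse_scs_flavor` and the returned field names are hypothetical, not part of the flavor-manager tooling:

```python
import re

# Matches names like SCS-4V-16-50, SCS-2V-4-20s, SCS-1L-1.
# Assumption: SCS-<vcpus><class>-<ram_gb>[-<disk_gb>][s], where the class
# letter describes the vCPU type (V = vCPU, L = crowded/low-perf) and a
# trailing "s" on the disk size indicates local SSD.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<vcpus>\d+)(?P<cls>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+))?(?P<ssd>s)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("vcpus")),
        "cpu_class": m.group("cls"),
        "ram_gb": int(m.group("ram")),
        # Flavors without a disk component (e.g. SCS-1L-1) are diskless.
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("ssd") is not None,
    }
```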
2026-02-05 03:40:18.681920 | orchestrator | 2026-02-05 03:40:18.682168 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-05 03:40:18.682201 | orchestrator | 2026-02-05 03:40:18.682215 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 03:40:18.682227 | orchestrator | Thursday 05 February 2026 03:39:39 +0000 (0:00:00.084) 0:00:00.084 ***** 2026-02-05 03:40:18.682238 | orchestrator | ok: [localhost] 2026-02-05 03:40:18.682251 | orchestrator | 2026-02-05 03:40:18.682262 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-05 03:40:18.682273 | orchestrator | Thursday 05 February 2026 03:39:41 +0000 (0:00:01.855) 0:00:01.939 ***** 2026-02-05 03:40:18.682284 | orchestrator | ok: [localhost] 2026-02-05 03:40:18.682295 | orchestrator | 2026-02-05 03:40:18.682306 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-05 03:40:18.682317 | orchestrator | Thursday 05 February 2026 03:39:47 +0000 (0:00:06.774) 0:00:08.714 ***** 2026-02-05 03:40:18.682328 | orchestrator | changed: [localhost] 2026-02-05 03:40:18.682339 | orchestrator | 2026-02-05 03:40:18.682350 | orchestrator | TASK [Create public network] *************************************************** 2026-02-05 03:40:18.682362 | orchestrator | Thursday 05 February 2026 03:39:54 +0000 (0:00:06.317) 0:00:15.032 ***** 2026-02-05 03:40:18.682373 | orchestrator | changed: [localhost] 2026-02-05 03:40:18.682383 | orchestrator | 2026-02-05 03:40:18.682394 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-05 03:40:18.682405 | orchestrator | Thursday 05 February 2026 03:39:59 +0000 (0:00:05.665) 0:00:20.697 ***** 2026-02-05 03:40:18.682422 | orchestrator | changed: [localhost] 2026-02-05 03:40:18.682433 | orchestrator | 2026-02-05 03:40:18.682444 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-05 03:40:18.682455 | orchestrator | Thursday 05 February 2026 03:40:06 +0000 (0:00:06.572) 0:00:27.269 ***** 2026-02-05 03:40:18.682466 | orchestrator | changed: [localhost] 2026-02-05 03:40:18.682476 | orchestrator | 2026-02-05 03:40:18.682487 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-05 03:40:18.682498 | orchestrator | Thursday 05 February 2026 03:40:10 +0000 (0:00:04.404) 0:00:31.674 ***** 2026-02-05 03:40:18.682509 | orchestrator | changed: [localhost] 2026-02-05 03:40:18.682519 | orchestrator | 2026-02-05 03:40:18.682530 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-05 03:40:18.682554 | orchestrator | Thursday 05 February 2026 03:40:14 +0000 (0:00:03.905) 0:00:35.579 ***** 2026-02-05 03:40:18.682566 | orchestrator | ok: [localhost] 2026-02-05 03:40:18.682577 | orchestrator | 2026-02-05 03:40:18.682588 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:40:18.682599 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 03:40:18.682611 | orchestrator | 2026-02-05 03:40:18.682622 | orchestrator | 2026-02-05 03:40:18.682632 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:40:18.682643 | orchestrator | Thursday 05 February 2026 03:40:18 +0000 (0:00:03.641) 0:00:39.220 ***** 2026-02-05 03:40:18.682654 | orchestrator | =============================================================================== 2026-02-05 03:40:18.682664 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.77s 2026-02-05 03:40:18.682675 | orchestrator | Set public network to default ------------------------------------------- 6.57s 2026-02-05 03:40:18.682686 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.32s 2026-02-05 03:40:18.682696 | orchestrator | Create public network --------------------------------------------------- 5.67s 2026-02-05 03:40:18.682739 | orchestrator | Create public subnet ---------------------------------------------------- 4.40s 2026-02-05 03:40:18.682776 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.91s 2026-02-05 03:40:18.682788 | orchestrator | Create manager role ----------------------------------------------------- 3.64s 2026-02-05 03:40:18.682799 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2026-02-05 03:40:21.239339 | orchestrator | 2026-02-05 03:40:21 | INFO  | It takes a moment until task 81488d0c-41da-4bcf-9e3a-3e24a305e814 (image-manager) has been started and output is visible here. 2026-02-05 03:41:05.004364 | orchestrator | 2026-02-05 03:40:23 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-05 03:41:05.004454 | orchestrator | 2026-02-05 03:40:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-05 03:41:05.004461 | orchestrator | 2026-02-05 03:40:24 | INFO  | Importing image Cirros 0.6.2 2026-02-05 03:41:05.004467 | orchestrator | 2026-02-05 03:40:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-05 03:41:05.004472 | orchestrator | 2026-02-05 03:40:26 | INFO  | Waiting for image to leave queued state... 2026-02-05 03:41:05.004477 | orchestrator | 2026-02-05 03:40:28 | INFO  | Waiting for import to complete... 
2026-02-05 03:41:05.004481 | orchestrator | 2026-02-05 03:40:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-05 03:41:05.004486 | orchestrator | 2026-02-05 03:40:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-05 03:41:05.004490 | orchestrator | 2026-02-05 03:40:38 | INFO  | Setting internal_version = 0.6.2 2026-02-05 03:41:05.004494 | orchestrator | 2026-02-05 03:40:38 | INFO  | Setting image_original_user = cirros 2026-02-05 03:41:05.004499 | orchestrator | 2026-02-05 03:40:38 | INFO  | Adding tag os:cirros 2026-02-05 03:41:05.004502 | orchestrator | 2026-02-05 03:40:39 | INFO  | Setting property architecture: x86_64 2026-02-05 03:41:05.004506 | orchestrator | 2026-02-05 03:40:39 | INFO  | Setting property hw_disk_bus: scsi 2026-02-05 03:41:05.004510 | orchestrator | 2026-02-05 03:40:39 | INFO  | Setting property hw_rng_model: virtio 2026-02-05 03:41:05.004515 | orchestrator | 2026-02-05 03:40:40 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-05 03:41:05.004519 | orchestrator | 2026-02-05 03:40:40 | INFO  | Setting property hw_watchdog_action: reset 2026-02-05 03:41:05.004522 | orchestrator | 2026-02-05 03:40:40 | INFO  | Setting property hypervisor_type: qemu 2026-02-05 03:41:05.004526 | orchestrator | 2026-02-05 03:40:40 | INFO  | Setting property os_distro: cirros 2026-02-05 03:41:05.004530 | orchestrator | 2026-02-05 03:40:41 | INFO  | Setting property os_purpose: minimal 2026-02-05 03:41:05.004534 | orchestrator | 2026-02-05 03:40:41 | INFO  | Setting property replace_frequency: never 2026-02-05 03:41:05.004538 | orchestrator | 2026-02-05 03:40:41 | INFO  | Setting property uuid_validity: none 2026-02-05 03:41:05.004541 | orchestrator | 2026-02-05 03:40:41 | INFO  | Setting property provided_until: none 2026-02-05 03:41:05.004545 | orchestrator | 2026-02-05 03:40:42 | INFO  | Setting property image_description: Cirros 2026-02-05 03:41:05.004549 | orchestrator | 2026-02-05 03:40:42 | INFO  | 
Setting property image_name: Cirros 2026-02-05 03:41:05.004553 | orchestrator | 2026-02-05 03:40:42 | INFO  | Setting property internal_version: 0.6.2 2026-02-05 03:41:05.004557 | orchestrator | 2026-02-05 03:40:43 | INFO  | Setting property image_original_user: cirros 2026-02-05 03:41:05.004577 | orchestrator | 2026-02-05 03:40:43 | INFO  | Setting property os_version: 0.6.2 2026-02-05 03:41:05.004587 | orchestrator | 2026-02-05 03:40:43 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-05 03:41:05.004592 | orchestrator | 2026-02-05 03:40:43 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-05 03:41:05.004596 | orchestrator | 2026-02-05 03:40:44 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-05 03:41:05.004600 | orchestrator | 2026-02-05 03:40:44 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-05 03:41:05.004603 | orchestrator | 2026-02-05 03:40:44 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-05 03:41:05.004607 | orchestrator | 2026-02-05 03:40:44 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-05 03:41:05.004614 | orchestrator | 2026-02-05 03:40:45 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-05 03:41:05.004618 | orchestrator | 2026-02-05 03:40:45 | INFO  | Importing image Cirros 0.6.3 2026-02-05 03:41:05.004622 | orchestrator | 2026-02-05 03:40:45 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-05 03:41:05.004625 | orchestrator | 2026-02-05 03:40:46 | INFO  | Waiting for image to leave queued state... 2026-02-05 03:41:05.004629 | orchestrator | 2026-02-05 03:40:48 | INFO  | Waiting for import to complete... 
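The repeated "Waiting for import to complete..." lines come from the image manager polling Glance roughly every 10 seconds until the web-download import finishes. A minimal sketch of such a wait loop under stated assumptions; `wait_for_import` and its parameters are hypothetical names, not the openstack-image-manager API:

```python
import time

def wait_for_import(get_status, interval=10, timeout=600, sleep=time.sleep):
    """Poll get_status() until it reports 'active', mimicking the
    'Waiting for import to complete...' loop in the log.

    get_status is any zero-argument callable returning the image status
    (e.g. a Glance lookup); sleep is injectable so tests can skip waiting.
    """
    waited = 0
    while waited < timeout:
        if get_status() == "active":
            return waited
        print("Waiting for import to complete...")
        sleep(interval)
        waited += interval
    raise TimeoutError(f"import did not finish within {timeout}s")
```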
2026-02-05 03:41:05.004643 | orchestrator | 2026-02-05 03:40:58 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-05 03:41:05.004648 | orchestrator | 2026-02-05 03:40:59 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-05 03:41:05.004651 | orchestrator | 2026-02-05 03:40:59 | INFO  | Setting internal_version = 0.6.3 2026-02-05 03:41:05.004655 | orchestrator | 2026-02-05 03:40:59 | INFO  | Setting image_original_user = cirros 2026-02-05 03:41:05.004659 | orchestrator | 2026-02-05 03:40:59 | INFO  | Adding tag os:cirros 2026-02-05 03:41:05.004663 | orchestrator | 2026-02-05 03:40:59 | INFO  | Setting property architecture: x86_64 2026-02-05 03:41:05.004666 | orchestrator | 2026-02-05 03:40:59 | INFO  | Setting property hw_disk_bus: scsi 2026-02-05 03:41:05.004670 | orchestrator | 2026-02-05 03:40:59 | INFO  | Setting property hw_rng_model: virtio 2026-02-05 03:41:05.004674 | orchestrator | 2026-02-05 03:41:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-05 03:41:05.004678 | orchestrator | 2026-02-05 03:41:00 | INFO  | Setting property hw_watchdog_action: reset 2026-02-05 03:41:05.004682 | orchestrator | 2026-02-05 03:41:00 | INFO  | Setting property hypervisor_type: qemu 2026-02-05 03:41:05.004686 | orchestrator | 2026-02-05 03:41:00 | INFO  | Setting property os_distro: cirros 2026-02-05 03:41:05.004689 | orchestrator | 2026-02-05 03:41:01 | INFO  | Setting property os_purpose: minimal 2026-02-05 03:41:05.004693 | orchestrator | 2026-02-05 03:41:01 | INFO  | Setting property replace_frequency: never 2026-02-05 03:41:05.004697 | orchestrator | 2026-02-05 03:41:01 | INFO  | Setting property uuid_validity: none 2026-02-05 03:41:05.004701 | orchestrator | 2026-02-05 03:41:01 | INFO  | Setting property provided_until: none 2026-02-05 03:41:05.004705 | orchestrator | 2026-02-05 03:41:02 | INFO  | Setting property image_description: Cirros 2026-02-05 03:41:05.004709 | orchestrator | 2026-02-05 03:41:02 | INFO  | 
Setting property image_name: Cirros 2026-02-05 03:41:05.004724 | orchestrator | 2026-02-05 03:41:02 | INFO  | Setting property internal_version: 0.6.3 2026-02-05 03:41:05.004738 | orchestrator | 2026-02-05 03:41:02 | INFO  | Setting property image_original_user: cirros 2026-02-05 03:41:05.004742 | orchestrator | 2026-02-05 03:41:03 | INFO  | Setting property os_version: 0.6.3 2026-02-05 03:41:05.004785 | orchestrator | 2026-02-05 03:41:03 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-05 03:41:05.004789 | orchestrator | 2026-02-05 03:41:03 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-05 03:41:05.004793 | orchestrator | 2026-02-05 03:41:04 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-05 03:41:05.004797 | orchestrator | 2026-02-05 03:41:04 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-05 03:41:05.004801 | orchestrator | 2026-02-05 03:41:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-05 03:41:05.334626 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh 2026-02-05 03:41:07.493316 | orchestrator | 2026-02-05 03:41:07 | INFO  | date: 2026-02-04 2026-02-05 03:41:07.493441 | orchestrator | 2026-02-05 03:41:07 | INFO  | image: octavia-amphora-haproxy-2024.2.20260204.qcow2 2026-02-05 03:41:07.493493 | orchestrator | 2026-02-05 03:41:07 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2 2026-02-05 03:41:07.493517 | orchestrator | 2026-02-05 03:41:07 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2.CHECKSUM 2026-02-05 03:41:07.671723 | orchestrator | 2026-02-05 03:41:07 | INFO  | checksum: fa81774e60e440b52eb763bc24f9302dc0d7fa56080593c2ba4182f5e23fdc54 2026-02-05 03:41:07.746529 | orchestrator | 
2026-02-05 03:41:07 | INFO  | It takes a moment until task 4133f24c-f8a0-4438-8a29-d0cabde0ed29 (image-manager) has been started and output is visible here. 2026-02-05 03:42:51.018392 | orchestrator | 2026-02-05 03:41:10 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-04' 2026-02-05 03:42:51.018532 | orchestrator | 2026-02-05 03:41:10 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2: 200 2026-02-05 03:42:51.018560 | orchestrator | 2026-02-05 03:41:10 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-04 2026-02-05 03:42:51.018581 | orchestrator | 2026-02-05 03:41:10 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2 2026-02-05 03:42:51.018603 | orchestrator | 2026-02-05 03:41:11 | INFO  | Waiting for image to leave queued state... 2026-02-05 03:42:51.018621 | orchestrator | 2026-02-05 03:41:13 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018639 | orchestrator | 2026-02-05 03:41:24 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018657 | orchestrator | 2026-02-05 03:41:34 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018674 | orchestrator | 2026-02-05 03:41:44 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018698 | orchestrator | 2026-02-05 03:41:54 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018717 | orchestrator | 2026-02-05 03:42:04 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018736 | orchestrator | 2026-02-05 03:42:14 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018785 | orchestrator | 2026-02-05 03:42:24 | INFO  | Waiting for import to complete... 2026-02-05 03:42:51.018803 | orchestrator | 2026-02-05 03:42:34 | INFO  | Waiting for import to complete... 
2026-02-05 03:42:51.018859 | orchestrator | 2026-02-05 03:42:45 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-04' successfully completed, reloading images 2026-02-05 03:42:51.018881 | orchestrator | 2026-02-05 03:42:45 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-04' 2026-02-05 03:42:51.018901 | orchestrator | 2026-02-05 03:42:45 | INFO  | Setting internal_version = 2026-02-04 2026-02-05 03:42:51.018922 | orchestrator | 2026-02-05 03:42:45 | INFO  | Setting image_original_user = ubuntu 2026-02-05 03:42:51.018941 | orchestrator | 2026-02-05 03:42:45 | INFO  | Adding tag amphora 2026-02-05 03:42:51.018959 | orchestrator | 2026-02-05 03:42:45 | INFO  | Adding tag os:ubuntu 2026-02-05 03:42:51.018977 | orchestrator | 2026-02-05 03:42:46 | INFO  | Setting property architecture: x86_64 2026-02-05 03:42:51.018995 | orchestrator | 2026-02-05 03:42:46 | INFO  | Setting property hw_disk_bus: scsi 2026-02-05 03:42:51.019015 | orchestrator | 2026-02-05 03:42:46 | INFO  | Setting property hw_rng_model: virtio 2026-02-05 03:42:51.019034 | orchestrator | 2026-02-05 03:42:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-05 03:42:51.019052 | orchestrator | 2026-02-05 03:42:47 | INFO  | Setting property hw_watchdog_action: reset 2026-02-05 03:42:51.019073 | orchestrator | 2026-02-05 03:42:47 | INFO  | Setting property hypervisor_type: qemu 2026-02-05 03:42:51.019094 | orchestrator | 2026-02-05 03:42:47 | INFO  | Setting property os_distro: ubuntu 2026-02-05 03:42:51.019108 | orchestrator | 2026-02-05 03:42:47 | INFO  | Setting property replace_frequency: quarterly 2026-02-05 03:42:51.019121 | orchestrator | 2026-02-05 03:42:48 | INFO  | Setting property uuid_validity: last-1 2026-02-05 03:42:51.019152 | orchestrator | 2026-02-05 03:42:48 | INFO  | Setting property provided_until: none 2026-02-05 03:42:51.019166 | orchestrator | 2026-02-05 03:42:48 | INFO  | Setting property os_purpose: network 2026-02-05 03:42:51.019179 | orchestrator 
| 2026-02-05 03:42:48 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-05 03:42:51.019193 | orchestrator | 2026-02-05 03:42:49 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-05 03:42:51.019203 | orchestrator | 2026-02-05 03:42:49 | INFO  | Setting property internal_version: 2026-02-04 2026-02-05 03:42:51.019214 | orchestrator | 2026-02-05 03:42:49 | INFO  | Setting property image_original_user: ubuntu 2026-02-05 03:42:51.019225 | orchestrator | 2026-02-05 03:42:49 | INFO  | Setting property os_version: 2026-02-04 2026-02-05 03:42:51.019236 | orchestrator | 2026-02-05 03:42:50 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2 2026-02-05 03:42:51.019247 | orchestrator | 2026-02-05 03:42:50 | INFO  | Setting property image_build_date: 2026-02-04 2026-02-05 03:42:51.019285 | orchestrator | 2026-02-05 03:42:50 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-04' 2026-02-05 03:42:51.019305 | orchestrator | 2026-02-05 03:42:50 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-04' 2026-02-05 03:42:51.019323 | orchestrator | 2026-02-05 03:42:50 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-05 03:42:51.019339 | orchestrator | 2026-02-05 03:42:50 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-05 03:42:51.019357 | orchestrator | 2026-02-05 03:42:50 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-05 03:42:51.019375 | orchestrator | 2026-02-05 03:42:50 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-05 03:42:51.624701 | orchestrator | ok: Runtime: 0:03:39.593271 2026-02-05 03:42:51.644522 | 2026-02-05 03:42:51.644662 | TASK [Run checks] 2026-02-05 03:42:52.394244 | orchestrator | + set -e 2026-02-05 03:42:52.394620 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-05 03:42:52.394666 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 03:42:52.394699 | orchestrator | ++ INTERACTIVE=false 2026-02-05 03:42:52.394722 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 03:42:52.394744 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 03:42:52.394799 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-05 03:42:52.396441 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-05 03:42:52.404248 | orchestrator | 2026-02-05 03:42:52.404337 | orchestrator | # CHECK 2026-02-05 03:42:52.404351 | orchestrator | 2026-02-05 03:42:52.404363 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-05 03:42:52.404379 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-05 03:42:52.404390 | orchestrator | + echo 2026-02-05 03:42:52.404400 | orchestrator | + echo '# CHECK' 2026-02-05 03:42:52.404409 | orchestrator | + echo 2026-02-05 03:42:52.404425 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-05 03:42:52.405081 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-05 03:42:52.469301 | orchestrator | 2026-02-05 03:42:52.469374 | orchestrator | ## Containers @ testbed-manager 2026-02-05 03:42:52.469386 | orchestrator | 2026-02-05 03:42:52.469396 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-05 03:42:52.469404 | orchestrator | + echo 2026-02-05 03:42:52.469412 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-05 03:42:52.469420 | orchestrator | + echo 2026-02-05 03:42:52.469428 | orchestrator | + osism container testbed-manager ps 2026-02-05 03:42:54.490821 | orchestrator | 2026-02-05 03:42:54 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-05 03:42:54.880268 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-05 03:42:54.880399 | orchestrator | fcc03bbc62cf 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-05 03:42:54.880434 | orchestrator | deb8bf573f6d registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_alertmanager 2026-02-05 03:42:54.880451 | orchestrator | 7980fedc5b95 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-05 03:42:54.880469 | orchestrator | d75b56867405 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-05 03:42:54.880484 | orchestrator | 826b960b9d77 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_server 2026-02-05 03:42:54.880508 | orchestrator | bafacc80f0d9 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 59 minutes ago Up 58 minutes cephclient 2026-02-05 03:42:54.880526 | orchestrator | a83f3708fc17 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-05 03:42:54.880544 | orchestrator | ec8c0a515e56 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-05 03:42:54.880595 | orchestrator | eb632850bf5c registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-05 03:42:54.880615 | orchestrator | 7e0b4e33a69e registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-05 03:42:54.880633 | orchestrator | f03a3fbd4139 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-05 03:42:54.880651 | 
orchestrator | debdb9493f4c registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-05 03:42:54.880669 | orchestrator | 9277b5069001 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-05 03:42:54.880687 | orchestrator | 41d1719692fd registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-05 03:42:54.880731 | orchestrator | 1926993d688a registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-05 03:42:54.880800 | orchestrator | 47e2f2b30755 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-05 03:42:54.880819 | orchestrator | f476f2f40359 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-05 03:42:54.880834 | orchestrator | d74dfe452084 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-05 03:42:54.880849 | orchestrator | 8df6fb9c77b7 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-05 03:42:54.880865 | orchestrator | d0db681c7755 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-05 03:42:54.880882 | orchestrator | cf2baff9cec7 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-05 03:42:54.880898 | orchestrator | 03270b1388b8 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-05 
03:42:54.880927 | orchestrator | 5717e4df0a4b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-05 03:42:54.880942 | orchestrator | d17d4c54fc47 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-05 03:42:54.880958 | orchestrator | a9e9ba69bd55 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-05 03:42:54.880973 | orchestrator | a7c04ac3398a registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-05 03:42:54.880989 | orchestrator | 2982f2f3af39 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-05 03:42:54.881005 | orchestrator | fd56c293a9f1 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-05 03:42:54.881019 | orchestrator | 08df2d3f6934 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-05 03:42:54.881042 | orchestrator | 6b9254aa6e40 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-05 03:42:55.224978 | orchestrator | 2026-02-05 03:42:55.225076 | orchestrator | ## Images @ testbed-manager 2026-02-05 03:42:55.225088 | orchestrator | 2026-02-05 03:42:55.225096 | orchestrator | + echo 2026-02-05 03:42:55.225105 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-05 03:42:55.225114 | orchestrator | + echo 2026-02-05 03:42:55.225125 | orchestrator | + osism container testbed-manager images 2026-02-05 03:42:57.600004 | 
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-05 03:42:57.600115 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 d38f323e7dd5 24 hours ago 238MB 2026-02-05 03:42:57.600125 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 days ago 41.4MB 2026-02-05 03:42:57.600132 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-05 03:42:57.600138 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-05 03:42:57.600144 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-05 03:42:57.600150 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-05 03:42:57.600156 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-05 03:42:57.600164 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-05 03:42:57.600170 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-05 03:42:57.600193 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-05 03:42:57.600199 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-05 03:42:57.600205 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-05 03:42:57.600211 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-05 03:42:57.600217 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-05 03:42:57.600222 | 
orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-05 03:42:57.600229 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-05 03:42:57.600234 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-05 03:42:57.600243 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-05 03:42:57.600253 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB 2026-02-05 03:42:57.600263 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB 2026-02-05 03:42:57.600271 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-05 03:42:57.600284 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-05 03:42:57.600296 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-05 03:42:57.600305 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-05 03:42:57.600314 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-05 03:42:57.911054 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-05 03:42:57.911160 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-05 03:42:57.970129 | orchestrator | 2026-02-05 03:42:57.970238 | orchestrator | ## Containers @ testbed-node-0 2026-02-05 03:42:57.970254 | orchestrator | 2026-02-05 03:42:57.970266 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-05 03:42:57.970277 | orchestrator | + echo 2026-02-05 03:42:57.970289 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-05 03:42:57.970301 | orchestrator | + echo 2026-02-05 03:42:57.970312 | orchestrator | + osism container 
testbed-node-0 ps 2026-02-05 03:43:00.417982 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-05 03:43:00.418123 | orchestrator | 26b674233bf9 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_conductor 2026-02-05 03:43:00.418172 | orchestrator | a4d2929292df registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-05 03:43:00.418194 | orchestrator | d99279f61363 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-02-05 03:43:00.418209 | orchestrator | 7cb91f0ced88 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-02-05 03:43:00.418251 | orchestrator | 60d9e8158e72 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-05 03:43:00.418268 | orchestrator | 131be7d932c2 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-05 03:43:00.418290 | orchestrator | ad0622161bd4 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-05 03:43:00.418305 | orchestrator | a5f6584486d0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2026-02-05 03:43:00.418315 | orchestrator | 391d6756f7cf registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-05 03:43:00.418325 | orchestrator | 994aeae8a787 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-05 03:43:00.418334 | orchestrator | 2f293a3abd7d registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-05 03:43:00.418344 | orchestrator | e7ef91a7da3a registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-05 03:43:00.418352 | orchestrator | 59f6508e69f3 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-05 03:43:00.418361 | orchestrator | 47ef742255ca registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-05 03:43:00.418370 | orchestrator | 79c1d6d7cd72 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-05 03:43:00.418439 | orchestrator | 8c3bcfbe905b registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-05 03:43:00.418450 | orchestrator | c6c5a07c842c registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-05 03:43:00.418459 | orchestrator | c016c4112fad registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-05 03:43:00.418469 | orchestrator | 89428f2548dc registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-05 03:43:00.418503 | orchestrator | ff6ad5e0b7c1 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-05 03:43:00.418520 | orchestrator | f7b6c4c88cb9 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-05 03:43:00.418536 | orchestrator | fa4b4a2c95c2 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-05 03:43:00.418562 | orchestrator | cf2bfdeb8edf registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-05 03:43:00.418579 | orchestrator | 647ddbc79fe4 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-05 03:43:00.418596 | orchestrator | 291c30fd7b2e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-05 03:43:00.418618 | orchestrator | 9fba198f9afc registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-05 03:43:00.418634 | orchestrator | 5ef8689703c2 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-05 03:43:00.418648 | orchestrator | 6f7ec0fbd36f registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-05 03:43:00.418657 | orchestrator | df7297d83f5a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 
2026-02-05 03:43:00.418666 | orchestrator | fe656889df63 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-05 03:43:00.418675 | orchestrator | 41d872076a1a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-05 03:43:00.418684 | orchestrator | 8e3e79983318 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-05 03:43:00.418693 | orchestrator | f796f7fc7ed6 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-05 03:43:00.418702 | orchestrator | 7ab98001befe registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-05 03:43:00.418711 | orchestrator | 2fe1f5d2607c registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-02-05 03:43:00.418719 | orchestrator | 429cfdcbb980 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-05 03:43:00.418728 | orchestrator | f6ff7d7cf614 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-05 03:43:00.418737 | orchestrator | df7dcacf257d registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-02-05 03:43:00.418746 | orchestrator | 77a437a6c3f9 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) 
skyline_apiserver 2026-02-05 03:43:00.418828 | orchestrator | fcc5cedc8e38 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-05 03:43:00.418849 | orchestrator | 9411158dc3c9 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-05 03:43:00.418859 | orchestrator | bb4fb95cd015 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-05 03:43:00.418874 | orchestrator | d50df972282d registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-05 03:43:00.418890 | orchestrator | 5c5131782021 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-05 03:43:00.418904 | orchestrator | 57c032830311 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-05 03:43:00.418919 | orchestrator | 42e34adae32b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 52 minutes (healthy) placement_api 2026-02-05 03:43:00.418934 | orchestrator | 451443fd65af registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-02-05 03:43:00.418949 | orchestrator | e4df8db55f4c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-05 03:43:00.418964 | orchestrator | 34f94161e0f5 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-02-05 03:43:00.418979 | 
orchestrator | 3d50380f148b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-0 2026-02-05 03:43:00.418995 | orchestrator | fd10f17c4671 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-02-05 03:43:00.419010 | orchestrator | de37024be869 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-02-05 03:43:00.419021 | orchestrator | 0ef3c2e7deb6 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-05 03:43:00.419030 | orchestrator | 7ef536e74d30 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-05 03:43:00.419039 | orchestrator | 8226ade176ab registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-05 03:43:00.419048 | orchestrator | af719da2f5fc registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-05 03:43:00.419062 | orchestrator | ea7cb73f0e4f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-05 03:43:00.419071 | orchestrator | 13a1dc638714 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-05 03:43:00.419087 | orchestrator | 4dcec77bf1a1 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-05 03:43:00.419103 | orchestrator | 4b41e4d6c284 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-05 03:43:00.419112 | orchestrator | f45053b4ac25 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-05 03:43:00.419121 | orchestrator | e68c41a81dc2 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-05 03:43:00.419130 | orchestrator | 3c598978ad7c registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-05 03:43:00.419139 | orchestrator | d89701ae4bbb registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-05 03:43:00.419147 | orchestrator | a5125f879f5c registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-05 03:43:00.419156 | orchestrator | 4933ae957d9c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-05 03:43:00.419165 | orchestrator | 5464250a0b0b registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-05 03:43:00.419174 | orchestrator | 66e8d5b2f450 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-05 03:43:00.419182 | orchestrator | f2d002bdca32 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-05 03:43:00.419191 | orchestrator | eee5e423550a registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init 
--single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-05 03:43:00.419200 | orchestrator | 46e98ad610c8 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-05 03:43:00.763069 | orchestrator | 2026-02-05 03:43:00.763179 | orchestrator | ## Images @ testbed-node-0 2026-02-05 03:43:00.763198 | orchestrator | 2026-02-05 03:43:00.763211 | orchestrator | + echo 2026-02-05 03:43:00.763223 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-05 03:43:00.763235 | orchestrator | + echo 2026-02-05 03:43:00.763246 | orchestrator | + osism container testbed-node-0 images 2026-02-05 03:43:03.185658 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-05 03:43:03.185854 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-05 03:43:03.185873 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-05 03:43:03.185884 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-05 03:43:03.185894 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-05 03:43:03.185926 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-05 03:43:03.185938 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-05 03:43:03.185948 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-05 03:43:03.185957 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-05 03:43:03.185966 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-05 03:43:03.185976 | orchestrator | 
registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-05 03:43:03.185985 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-05 03:43:03.185994 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-05 03:43:03.186004 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-05 03:43:03.186060 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-05 03:43:03.186072 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-05 03:43:03.186082 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-05 03:43:03.186093 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-05 03:43:03.186103 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-05 03:43:03.186113 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-05 03:43:03.186123 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-05 03:43:03.186133 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-05 03:43:03.186144 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-05 03:43:03.186154 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-05 03:43:03.186165 | 
orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-05 03:43:03.186175 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-05 03:43:03.186185 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-05 03:43:03.186195 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-05 03:43:03.186211 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-05 03:43:03.186222 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-05 03:43:03.186233 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-05 03:43:03.186251 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-05 03:43:03.186279 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-05 03:43:03.186290 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-05 03:43:03.186300 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-05 03:43:03.186311 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-05 03:43:03.186321 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-05 03:43:03.186331 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-05 03:43:03.186341 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-05 03:43:03.186351 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-05 03:43:03.186362 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-05 03:43:03.186372 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-05 03:43:03.186383 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-05 03:43:03.186394 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-05 03:43:03.186404 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-05 03:43:03.186415 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-05 03:43:03.186425 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-05 03:43:03.186436 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-05 03:43:03.186446 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-05 03:43:03.186457 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-05 03:43:03.186467 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-05 03:43:03.186477 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-05 03:43:03.186487 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-05 03:43:03.186497 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-05 03:43:03.186506 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-05 03:43:03.186516 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-05 03:43:03.186526 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-05 03:43:03.186551 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-05 03:43:03.186561 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-05 03:43:03.186576 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-05 03:43:03.186587 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-05 03:43:03.186598 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-05 03:43:03.186608 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-05 03:43:03.186618 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-05 03:43:03.186634 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-05 03:43:03.186645 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-05 03:43:03.186656 | orchestrator | 
registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-05 03:43:03.186671 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-05 03:43:03.186680 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-05 03:43:03.186690 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-05 03:43:03.520373 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-05 03:43:03.521318 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-05 03:43:03.573969 | orchestrator | 2026-02-05 03:43:03.574143 | orchestrator | ## Containers @ testbed-node-1 2026-02-05 03:43:03.574173 | orchestrator | 2026-02-05 03:43:03.574191 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-05 03:43:03.574209 | orchestrator | + echo 2026-02-05 03:43:03.574226 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-05 03:43:03.574244 | orchestrator | + echo 2026-02-05 03:43:03.574263 | orchestrator | + osism container testbed-node-1 ps 2026-02-05 03:43:06.014460 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-05 03:43:06.014548 | orchestrator | fc8f93abab56 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_conductor 2026-02-05 03:43:06.014559 | orchestrator | 20d96aca7245 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-05 03:43:06.014568 | orchestrator | 9135f56798c3 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-02-05 03:43:06.014575 | orchestrator | 5047413799b3 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 
10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-02-05 03:43:06.014603 | orchestrator | b04b0d661c79 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-05 03:43:06.014611 | orchestrator | 46e2cb4e0d90 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-05 03:43:06.014657 | orchestrator | e42f371629b0 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-05 03:43:06.014671 | orchestrator | d35b8baf4e55 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2026-02-05 03:43:06.014687 | orchestrator | 756d900e2128 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-05 03:43:06.014705 | orchestrator | 07d398bd5532 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-05 03:43:06.014718 | orchestrator | 043f5e5b57ed registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-05 03:43:06.014731 | orchestrator | c721a6edaa60 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-05 03:43:06.014835 | orchestrator | 2f3758dd2149 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-05 03:43:06.014851 | orchestrator | 3ade0291ffcc 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-05 03:43:06.014864 | orchestrator | 5c0d0dcd6880 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-05 03:43:06.014877 | orchestrator | 7cc7055ac603 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-05 03:43:06.014890 | orchestrator | de79c02f0eb2 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-05 03:43:06.014903 | orchestrator | cdabbab81b59 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-05 03:43:06.014912 | orchestrator | 30732f702b9a registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-05 03:43:06.014939 | orchestrator | b20d2c85bc0b registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-02-05 03:43:06.014947 | orchestrator | 1dfb92f82a00 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-05 03:43:06.014954 | orchestrator | f22dddedb153 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-05 03:43:06.014962 | orchestrator | e5c830384be1 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-05 03:43:06.014969 | 
orchestrator | 193b152d04f6 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-05 03:43:06.014985 | orchestrator | 6887fef51dd8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-05 03:43:06.014992 | orchestrator | b9dd20b90c0a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-05 03:43:06.014999 | orchestrator | 9a113bac4eb3 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-05 03:43:06.015008 | orchestrator | 0ca710f05354 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-05 03:43:06.015017 | orchestrator | b0a2c99bdda6 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-05 03:43:06.015026 | orchestrator | 8996a48d1581 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-05 03:43:06.015034 | orchestrator | 7c0c3e31ff31 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-05 03:43:06.015043 | orchestrator | 3275dd541367 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-05 03:43:06.015053 | orchestrator | da642bbb1394 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes 
(healthy) cinder_backup 2026-02-05 03:43:06.015061 | orchestrator | d9a10882608b registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-05 03:43:06.015070 | orchestrator | ef7eace14931 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-02-05 03:43:06.015079 | orchestrator | 8aef05de09a4 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-05 03:43:06.015094 | orchestrator | b2a898a3fcf8 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-05 03:43:06.015104 | orchestrator | e90b731e04a5 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-02-05 03:43:06.015117 | orchestrator | 154bc25f6c1b registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-02-05 03:43:06.015144 | orchestrator | 032e000bb78b registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-05 03:43:06.015157 | orchestrator | d53d25c61dd1 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-05 03:43:06.015176 | orchestrator | 97dbbfc9ce7e registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-05 03:43:06.015188 | orchestrator | 79b25515edd5 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-05 
03:43:06.015215 | orchestrator | 38ba5c3857e4 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-05 03:43:06.015238 | orchestrator | 574e418900fd registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-05 03:43:06.015250 | orchestrator | 8b6c86e76fbe registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-02-05 03:43:06.016027 | orchestrator | 5d33bb12253e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-02-05 03:43:06.016117 | orchestrator | 9de080737ad1 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-05 03:43:06.016139 | orchestrator | 5d468c434106 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-05 03:43:06.016158 | orchestrator | 916b0571ffcc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1 2026-02-05 03:43:06.016177 | orchestrator | 311ab76c0775 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-05 03:43:06.016194 | orchestrator | df4012ab4a61 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-05 03:43:06.016212 | orchestrator | 16dcc545a518 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-05 03:43:06.016229 | orchestrator | 674fd183cc9e 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-05 03:43:06.016246 | orchestrator | c75ec7fca049 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-05 03:43:06.016263 | orchestrator | 5ce94efff598 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-05 03:43:06.016280 | orchestrator | 5a6c05ef6a86 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-05 03:43:06.016297 | orchestrator | 178a776df07a registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-05 03:43:06.016314 | orchestrator | 2e8ad9a70f6a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-05 03:43:06.016348 | orchestrator | 089e98636b05 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-05 03:43:06.016363 | orchestrator | b2f621bd122d registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-05 03:43:06.016377 | orchestrator | 1e70a52ee039 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-05 03:43:06.016391 | orchestrator | c088f1b42641 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-05 03:43:06.016404 | orchestrator | e1930097f2d8 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-05 03:43:06.016429 | orchestrator | 26d202b6670c registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-05 03:43:06.016443 | orchestrator | 596a4c7d2c8b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-05 03:43:06.016457 | orchestrator | aac43b469fc5 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-05 03:43:06.016487 | orchestrator | 0e941739706e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-05 03:43:06.016502 | orchestrator | 22b1ac4d3766 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-05 03:43:06.016522 | orchestrator | 61de264d4427 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-05 03:43:06.016537 | orchestrator | 50b5b77cb8d5 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-05 03:43:06.336065 | orchestrator | 2026-02-05 03:43:06.336154 | orchestrator | ## Images @ testbed-node-1 2026-02-05 03:43:06.336167 | orchestrator | 2026-02-05 03:43:06.336176 | orchestrator | + echo 2026-02-05 03:43:06.336185 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-05 03:43:06.336194 | orchestrator | + echo 2026-02-05 03:43:06.336202 | orchestrator | + osism container testbed-node-1 images 2026-02-05 03:43:08.719577 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-05 03:43:08.719688 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-05 03:43:08.719705 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-05 03:43:08.719717 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-05 03:43:08.719729 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-05 03:43:08.719741 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-05 03:43:08.719752 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-05 03:43:08.719840 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-05 03:43:08.719852 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-05 03:43:08.719863 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-05 03:43:08.719874 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-05 03:43:08.719885 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-05 03:43:08.719896 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-05 03:43:08.719907 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-05 03:43:08.719918 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-05 03:43:08.719928 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 
1.15GB 2026-02-05 03:43:08.719939 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-05 03:43:08.719950 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-05 03:43:08.719961 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-05 03:43:08.719972 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-05 03:43:08.719982 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-05 03:43:08.719993 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-05 03:43:08.720004 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-05 03:43:08.720015 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-05 03:43:08.720025 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-05 03:43:08.720036 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-05 03:43:08.720047 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-05 03:43:08.720058 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-05 03:43:08.720069 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-05 03:43:08.720080 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-05 03:43:08.720090 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-05 03:43:08.720103 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-05 03:43:08.720136 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-05 03:43:08.720158 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-05 03:43:08.720171 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-05 03:43:08.720185 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-05 03:43:08.720198 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-05 03:43:08.720212 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-05 03:43:08.720243 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-05 03:43:08.720257 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-05 03:43:08.720270 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-05 03:43:08.720281 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-05 03:43:08.720292 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-05 03:43:08.720303 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-05 03:43:08.720314 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-05 03:43:08.720325 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-05 03:43:08.720336 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-05 03:43:08.720346 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-05 03:43:08.720357 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-05 03:43:08.720368 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-05 03:43:08.720379 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-05 03:43:08.720390 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-05 03:43:08.720401 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-05 03:43:08.720413 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-05 03:43:08.720432 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-05 03:43:08.720451 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-05 03:43:08.720476 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-05 03:43:08.720503 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-05 03:43:08.720522 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-05 03:43:08.720542 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-05 03:43:08.720574 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-05 03:43:08.720594 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-05 03:43:08.720614 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-05 03:43:08.720633 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-05 03:43:08.720655 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-05 03:43:08.720667 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-05 03:43:08.720678 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-05 03:43:08.720689 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-05 03:43:08.720700 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-05 03:43:08.720711 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-05 03:43:09.034522 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-05 03:43:09.034875 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-05 03:43:09.092795 |
orchestrator |
2026-02-05 03:43:09.092859 | orchestrator | ## Containers @ testbed-node-2
2026-02-05 03:43:09.092865 | orchestrator |
2026-02-05 03:43:09.092870 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-05 03:43:09.092874 | orchestrator | + echo
2026-02-05 03:43:09.092878 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-02-05 03:43:09.092883 | orchestrator | + echo
2026-02-05 03:43:09.092887 | orchestrator | + osism container testbed-node-2 ps
2026-02-05 03:43:11.584247 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-05 03:43:11.584323 | orchestrator | 29289e9ea92d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_conductor
2026-02-05 03:43:11.584333 | orchestrator | 8253cd17ae4a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-02-05 03:43:11.584349 | orchestrator | 47ad085051cb registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-02-05 03:43:11.584359 | orchestrator | 2eccd6b8326f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter
2026-02-05 03:43:11.584371 | orchestrator | 564b7b7c1437 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-05 03:43:11.584381 | orchestrator | 5e2c34eb3f57 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-05 03:43:11.584391 | orchestrator | eb071f074001 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_mysqld_exporter
2026-02-05 03:43:11.584401 | orchestrator | 47d4161e8d56 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter
2026-02-05 03:43:11.584429 | orchestrator | c7b287bd24d0 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-05 03:43:11.584440 | orchestrator | 4ef314ea6ea9 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-02-05 03:43:11.584450 | orchestrator | dc739b16764e registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-05 03:43:11.584459 | orchestrator | 4c5c8b6d2948 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-05 03:43:11.584479 | orchestrator | 5123f4fd861b registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-05 03:43:11.584488 | orchestrator | 9b7bddc7a117 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-05 03:43:11.584497 | orchestrator | 18e03271d246 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-05 03:43:11.584506 | orchestrator | 815708e651cc registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-05 03:43:11.584515 | orchestrator | f0431e7c832a registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-05 03:43:11.584524 | orchestrator | c566fcee5a3e registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-05 03:43:11.584534 | orchestrator | 5f13235e496a registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-05 03:43:11.584559 | orchestrator | 4dde13eb7da3 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-02-05 03:43:11.584569 | orchestrator | 120e996a3472 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-05 03:43:11.584578 | orchestrator | 9d276cbd552a registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-05 03:43:11.584583 | orchestrator | d993d19b14db registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-05 03:43:11.584589 | orchestrator | cb583861d656 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-05 03:43:11.584594 | orchestrator | a521bb55002f registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-02-05 03:43:11.584606 | orchestrator | 63b35d85027c registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-02-05 03:43:11.584612 | orchestrator | eb0c46253b51 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-05 03:43:11.584617 | orchestrator | f06b0fc5bea0 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-05 03:43:11.584623 | orchestrator | 66817554c6b9 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-05 03:43:11.584628 | orchestrator | 048dfab2b267 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-05 03:43:11.584633 | orchestrator | 4b676808a8a8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-05 03:43:11.584639 | orchestrator | bca780df4910 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-05 03:43:11.584644 | orchestrator | 312ee393859e registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-05 03:43:11.584650 | orchestrator | 42f434357dd2 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-05 03:43:11.584655 | orchestrator | 8361e22a1347 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-02-05 03:43:11.584661 | orchestrator | dc221786d909 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-05 03:43:11.584666 | orchestrator | a9ab69297f38 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up
34 minutes (healthy) glance_api
2026-02-05 03:43:11.584672 | orchestrator | a4b1a9d9cd03 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-02-05 03:43:11.584677 | orchestrator | e5c64667c8f0 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-02-05 03:43:11.584688 | orchestrator | dbb3cc25c9b3 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-05 03:43:11.584693 | orchestrator | a0a5cb045938 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-05 03:43:11.584699 | orchestrator | 81e314543b6e registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-05 03:43:11.584704 | orchestrator | 9d62ab3edc33 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-05 03:43:11.584714 | orchestrator | 714e5a98c95b registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-05 03:43:11.584720 | orchestrator | bd50acfc0c9d registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-02-05 03:43:11.584725 | orchestrator | 8df6eccebd8b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-02-05 03:43:11.584731 | orchestrator | 87a9b8fcc8fe registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone
2026-02-05 03:43:11.584736 | orchestrator | 6f708376acd1 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet
2026-02-05 03:43:11.584742 | orchestrator | 64853d3773c2 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-05 03:43:11.584747 | orchestrator | a78bf839f2e0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2
2026-02-05 03:43:11.584774 | orchestrator | 48b8de5322a7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-02-05 03:43:11.584784 | orchestrator | 458f6feaf079 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-02-05 03:43:11.584789 | orchestrator | 6f5d9096a6c9 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-05 03:43:11.584798 | orchestrator | 016af42515c9 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-05 03:43:11.584805 | orchestrator | f211cef654b8 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-05 03:43:11.584811 | orchestrator | 197cd8c1dea1 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-05 03:43:11.584817 | orchestrator | 19511cac8508 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-05 03:43:11.584824 | orchestrator | 2813ffc0de3c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-05 03:43:11.584830 | orchestrator | 4b0682a83f96 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-05 03:43:11.584841 | orchestrator | 8b35209bdbcd registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-05 03:43:11.584848 | orchestrator | 21f07e3238fa registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-05 03:43:11.584858 | orchestrator | 5aa22fa7d6f3 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-05 03:43:11.584865 | orchestrator | 80fb3242ab7e registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-05 03:43:11.584871 | orchestrator | bcabd7ed9d88 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-05 03:43:11.584877 | orchestrator | ea8159303f6e registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-05 03:43:11.584884 | orchestrator | 78453e5b020c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-05 03:43:11.584890 | orchestrator | 5ef926c92622 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-05 03:43:11.584897 | orchestrator | 0de2520f5821 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-05 03:43:11.584904 | orchestrator | 68fb38f6cab6 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-05 03:43:11.584910 | orchestrator | c5f201178029 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-05 03:43:11.584916 | orchestrator | c124d5b04009 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-05 03:43:11.916394 | orchestrator |
2026-02-05 03:43:11.916493 | orchestrator | ## Images @ testbed-node-2
2026-02-05 03:43:11.916509 | orchestrator |
2026-02-05 03:43:11.916520 | orchestrator | + echo
2026-02-05 03:43:11.916532 | orchestrator | + echo '## Images @ testbed-node-2'
2026-02-05 03:43:11.916544 | orchestrator | + echo
2026-02-05 03:43:11.916556 | orchestrator | + osism container testbed-node-2 images
2026-02-05 03:43:14.375320 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-05 03:43:14.375429 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-05 03:43:14.375445 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-05 03:43:14.375457 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-05 03:43:14.375485 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-05 03:43:14.375497 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-05 03:43:14.375508 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-05 03:43:14.375518 |
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-05 03:43:14.375529 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-05 03:43:14.375560 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-05 03:43:14.375572 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-05 03:43:14.375588 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-05 03:43:14.375599 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-05 03:43:14.375610 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-05 03:43:14.375622 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-05 03:43:14.375632 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-05 03:43:14.375643 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-05 03:43:14.375654 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-05 03:43:14.375665 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-05 03:43:14.375675 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-05 03:43:14.375686 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-05 03:43:14.375696 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-05 03:43:14.375707 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-05 03:43:14.375718 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-05 03:43:14.375729 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-05 03:43:14.375739 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-05 03:43:14.375750 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-05 03:43:14.375833 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-05 03:43:14.375845 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-05 03:43:14.375858 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-05 03:43:14.375870 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-05 03:43:14.375883 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-05 03:43:14.375923 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-05 03:43:14.375943 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-05 03:43:14.375962 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-05 03:43:14.375980 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-05 03:43:14.376014 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-05 03:43:14.376028 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-05 03:43:14.376040 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-05 03:43:14.376070 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-05 03:43:14.376084 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-05 03:43:14.376097 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-05 03:43:14.376109 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-05 03:43:14.376122 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-05 03:43:14.376134 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-05 03:43:14.376146 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-05 03:43:14.376158 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-05 03:43:14.376170 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-05 03:43:14.376182 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-05 03:43:14.376195 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-05 03:43:14.376208 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-05 03:43:14.376221 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-05 03:43:14.376231 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-05 03:43:14.376242 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-05 03:43:14.376253 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-05 03:43:14.376263 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-05 03:43:14.376274 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-05 03:43:14.376284 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-05 03:43:14.376295 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-05 03:43:14.376305 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-05 03:43:14.376316 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-05 03:43:14.376327 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-05 03:43:14.376383 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-05 03:43:14.376396 | orchestrator
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-05 03:43:14.376418 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-05 03:43:14.376429 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-05 03:43:14.376440 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-05 03:43:14.376451 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-05 03:43:14.376467 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-05 03:43:14.376478 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-05 03:43:14.708419 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-05 03:43:14.715577 | orchestrator | + set -e
2026-02-05 03:43:14.716236 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 03:43:14.716285 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 03:43:14.716299 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 03:43:14.716312 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 03:43:14.716325 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 03:43:14.716338 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 03:43:14.716353 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 03:43:14.716366 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 03:43:14.716379 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 03:43:14.716397 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 03:43:14.716416 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 03:43:14.716434 | orchestrator | ++ export ARA=false
2026-02-05 03:43:14.716452 | orchestrator | ++ ARA=false
2026-02-05 03:43:14.716470 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 03:43:14.716488 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 03:43:14.716507 | orchestrator | ++ export TEMPEST=false
2026-02-05 03:43:14.716525 | orchestrator | ++ TEMPEST=false
2026-02-05 03:43:14.716544 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 03:43:14.716564 | orchestrator | ++ IS_ZUUL=true
2026-02-05 03:43:14.716582 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:43:14.716601 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:43:14.716619 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 03:43:14.716638 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 03:43:14.716656 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 03:43:14.716674 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 03:43:14.716687 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 03:43:14.716698 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 03:43:14.716709 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 03:43:14.716726 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 03:43:14.716744 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-05 03:43:14.716796 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-05 03:43:14.725506 | orchestrator | + set -e
2026-02-05 03:43:14.725583 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 03:43:14.725596 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 03:43:14.725610 | orchestrator | ++ INTERACTIVE=false
2026-02-05 03:43:14.725620 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 03:43:14.725631 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 03:43:14.725745 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-05 03:43:14.726301 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-05 03:43:14.729421 | orchestrator |
2026-02-05 03:43:14.729517 | orchestrator | # Ceph status
2026-02-05 03:43:14.729537 | orchestrator |
2026-02-05 03:43:14.729553 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 03:43:14.729570 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 03:43:14.729584 | orchestrator | + echo
2026-02-05 03:43:14.729600 | orchestrator | + echo '# Ceph status'
2026-02-05 03:43:14.729645 | orchestrator | + echo
2026-02-05 03:43:14.729661 | orchestrator | + ceph -s
2026-02-05 03:43:15.333220 | orchestrator | cluster:
2026-02-05 03:43:15.333307 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-05 03:43:15.333320 | orchestrator | health: HEALTH_OK
2026-02-05 03:43:15.333330 | orchestrator |
2026-02-05 03:43:15.333340 | orchestrator | services:
2026-02-05 03:43:15.333348 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 69m)
2026-02-05 03:43:15.333364 | orchestrator | mgr: testbed-node-2(active, since 57m), standbys: testbed-node-1, testbed-node-0
2026-02-05 03:43:15.333374 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-05 03:43:15.333384 | orchestrator | osd: 6 osds: 6 up (since 65m), 6 in (since 66m)
2026-02-05 03:43:15.333394 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-05 03:43:15.333402 | orchestrator |
2026-02-05 03:43:15.333411 | orchestrator | data:
2026-02-05 03:43:15.333421 | orchestrator | volumes: 1/1 healthy
2026-02-05 03:43:15.333431 | orchestrator | pools: 14 pools, 401 pgs
2026-02-05 03:43:15.333438 | orchestrator | objects: 556 objects, 2.2 GiB
2026-02-05 03:43:15.333443 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-05 03:43:15.333449 | orchestrator | pgs: 401 active+clean
2026-02-05 03:43:15.333455 | orchestrator |
2026-02-05 03:43:15.374405 | orchestrator |
2026-02-05 03:43:15.374527 | orchestrator | # Ceph versions
2026-02-05 03:43:15.374552 | orchestrator |
2026-02-05 03:43:15.374564 | orchestrator | + echo
2026-02-05 03:43:15.374576 | orchestrator | + echo '# Ceph versions'
2026-02-05 03:43:15.374588 | orchestrator | + echo
2026-02-05 03:43:15.374599 | orchestrator | + ceph versions
2026-02-05 03:43:16.016951 | orchestrator | {
2026-02-05 03:43:16.017057 | orchestrator | "mon": {
2026-02-05 03:43:16.017074 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-05 03:43:16.017087 | orchestrator | },
2026-02-05 03:43:16.017099 | orchestrator | "mgr": {
2026-02-05 03:43:16.017111 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-05 03:43:16.017122 | orchestrator | },
2026-02-05 03:43:16.017133 | orchestrator | "osd": {
2026-02-05 03:43:16.017144 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-05 03:43:16.017155 | orchestrator | },
2026-02-05 03:43:16.017166 | orchestrator | "mds": {
2026-02-05 03:43:16.017177 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-05 03:43:16.017188 | orchestrator | },
2026-02-05 03:43:16.017199 | orchestrator | "rgw": {
2026-02-05 03:43:16.017210 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-05 03:43:16.017227 | orchestrator | },
2026-02-05 03:43:16.017244 | orchestrator | "overall": {
2026-02-05 03:43:16.017256 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-05 03:43:16.017267 | orchestrator | }
2026-02-05 03:43:16.017278 | orchestrator | }
2026-02-05 03:43:16.061304 | orchestrator |
2026-02-05 03:43:16.061402 | orchestrator | # Ceph OSD tree
2026-02-05 03:43:16.061417 | orchestrator |
2026-02-05 03:43:16.061427 | orchestrator | + echo
2026-02-05 03:43:16.061437 | orchestrator | + echo '# Ceph OSD tree'
2026-02-05 03:43:16.061448 | orchestrator | + echo
2026-02-05 03:43:16.061458 | orchestrator | + ceph osd df tree
2026-02-05 03:43:16.532035 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-02-05 03:43:16.532188 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 394 MiB 113 GiB 5.89 1.00 - root default
2026-02-05 03:43:16.532208 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3
2026-02-05 03:43:16.532221 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.89 1.00 190 up osd.1
2026-02-05 03:43:16.532232 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.93 1.01 202 up osd.4
2026-02-05 03:43:16.532243 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-4
2026-02-05 03:43:16.532254 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.69 1.14 191 up osd.2
2026-02-05 03:43:16.532294 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 66 MiB 19 GiB 5.07 0.86 197 up osd.5
2026-02-05 03:43:16.532306 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5
2026-02-05 03:43:16.532318 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.0 GiB 1003 MiB 1 KiB 62 MiB 19 GiB 5.20 0.88 189 up osd.0
2026-02-05 03:43:16.532330 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.55 1.11 201 up osd.3
2026-02-05 03:43:16.532341 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 394 MiB 113 GiB 5.89
2026-02-05 03:43:16.532353 | orchestrator | MIN/MAX VAR: 0.86/1.14 STDDEV: 0.61
2026-02-05 03:43:16.582459 | orchestrator |
2026-02-05 03:43:16.582595 | orchestrator | # Ceph monitor status
2026-02-05 03:43:16.582625 | orchestrator |
2026-02-05 03:43:16.582645 | orchestrator | + echo
2026-02-05 03:43:16.582657 | orchestrator | + echo '# Ceph monitor status'
2026-02-05 03:43:16.582672 | orchestrator | + echo
2026-02-05 03:43:16.582691 | orchestrator | + ceph mon stat
2026-02-05 03:43:17.176044 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-02-05 03:43:17.219860 | orchestrator | + echo
2026-02-05 03:43:17.220429 | orchestrator |
2026-02-05 03:43:17.220445 | orchestrator | # Ceph quorum status
2026-02-05 03:43:17.220451 | orchestrator |
2026-02-05 03:43:17.220456 | orchestrator | + echo '# Ceph quorum status'
2026-02-05 03:43:17.220461 | orchestrator | + echo
2026-02-05 03:43:17.220466 | orchestrator | + ceph quorum_status
2026-02-05 03:43:17.220875 | orchestrator | + jq
2026-02-05 03:43:17.866900 | orchestrator | {
2026-02-05 03:43:17.867006 | orchestrator | "election_epoch": 4,
2026-02-05 03:43:17.867029 | orchestrator | "quorum": [
2026-02-05 03:43:17.867046 | orchestrator | 0,
2026-02-05 03:43:17.867063 | orchestrator | 1,
2026-02-05 03:43:17.867080 | orchestrator | 2
2026-02-05 03:43:17.867095 | orchestrator | ],
2026-02-05 03:43:17.867111 | orchestrator | "quorum_names": [
2026-02-05 03:43:17.867127 | orchestrator | "testbed-node-0",
2026-02-05 03:43:17.867142 | orchestrator | "testbed-node-1",
2026-02-05 03:43:17.867156 | orchestrator | "testbed-node-2"
2026-02-05 03:43:17.867169 | orchestrator | ],
2026-02-05 03:43:17.867184 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-02-05 03:43:17.867201 | orchestrator | "quorum_age": 4171,
2026-02-05 03:43:17.867217 | orchestrator | "features": {
2026-02-05 03:43:17.867231 | orchestrator | "quorum_con": "4540138322906710015",
2026-02-05 03:43:17.867246 | orchestrator | "quorum_mon": [
2026-02-05 03:43:17.867262 |
orchestrator | "kraken", 2026-02-05 03:43:17.867277 | orchestrator | "luminous", 2026-02-05 03:43:17.867295 | orchestrator | "mimic", 2026-02-05 03:43:17.867309 | orchestrator | "osdmap-prune", 2026-02-05 03:43:17.867322 | orchestrator | "nautilus", 2026-02-05 03:43:17.867336 | orchestrator | "octopus", 2026-02-05 03:43:17.867351 | orchestrator | "pacific", 2026-02-05 03:43:17.867366 | orchestrator | "elector-pinging", 2026-02-05 03:43:17.867381 | orchestrator | "quincy", 2026-02-05 03:43:17.867396 | orchestrator | "reef" 2026-02-05 03:43:17.867412 | orchestrator | ] 2026-02-05 03:43:17.867429 | orchestrator | }, 2026-02-05 03:43:17.867444 | orchestrator | "monmap": { 2026-02-05 03:43:17.867460 | orchestrator | "epoch": 1, 2026-02-05 03:43:17.867476 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-05 03:43:17.867495 | orchestrator | "modified": "2026-02-05T02:33:33.505926Z", 2026-02-05 03:43:17.867512 | orchestrator | "created": "2026-02-05T02:33:33.505926Z", 2026-02-05 03:43:17.867527 | orchestrator | "min_mon_release": 18, 2026-02-05 03:43:17.867544 | orchestrator | "min_mon_release_name": "reef", 2026-02-05 03:43:17.867560 | orchestrator | "election_strategy": 1, 2026-02-05 03:43:17.867578 | orchestrator | "disallowed_leaders: ": "", 2026-02-05 03:43:17.867595 | orchestrator | "stretch_mode": false, 2026-02-05 03:43:17.867613 | orchestrator | "tiebreaker_mon": "", 2026-02-05 03:43:17.867629 | orchestrator | "removed_ranks: ": "", 2026-02-05 03:43:17.867647 | orchestrator | "features": { 2026-02-05 03:43:17.867664 | orchestrator | "persistent": [ 2026-02-05 03:43:17.867682 | orchestrator | "kraken", 2026-02-05 03:43:17.867736 | orchestrator | "luminous", 2026-02-05 03:43:17.867749 | orchestrator | "mimic", 2026-02-05 03:43:17.867862 | orchestrator | "osdmap-prune", 2026-02-05 03:43:17.867874 | orchestrator | "nautilus", 2026-02-05 03:43:17.867884 | orchestrator | "octopus", 2026-02-05 03:43:17.867893 | orchestrator | "pacific", 2026-02-05 
03:43:17.867903 | orchestrator | "elector-pinging", 2026-02-05 03:43:17.867912 | orchestrator | "quincy", 2026-02-05 03:43:17.867922 | orchestrator | "reef" 2026-02-05 03:43:17.867931 | orchestrator | ], 2026-02-05 03:43:17.867947 | orchestrator | "optional": [] 2026-02-05 03:43:17.867962 | orchestrator | }, 2026-02-05 03:43:17.868002 | orchestrator | "mons": [ 2026-02-05 03:43:17.868022 | orchestrator | { 2026-02-05 03:43:17.868038 | orchestrator | "rank": 0, 2026-02-05 03:43:17.868054 | orchestrator | "name": "testbed-node-0", 2026-02-05 03:43:17.868064 | orchestrator | "public_addrs": { 2026-02-05 03:43:17.868074 | orchestrator | "addrvec": [ 2026-02-05 03:43:17.868083 | orchestrator | { 2026-02-05 03:43:17.868093 | orchestrator | "type": "v2", 2026-02-05 03:43:17.868105 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-05 03:43:17.868123 | orchestrator | "nonce": 0 2026-02-05 03:43:17.868139 | orchestrator | }, 2026-02-05 03:43:17.868155 | orchestrator | { 2026-02-05 03:43:17.868171 | orchestrator | "type": "v1", 2026-02-05 03:43:17.868187 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-05 03:43:17.868203 | orchestrator | "nonce": 0 2026-02-05 03:43:17.868218 | orchestrator | } 2026-02-05 03:43:17.868232 | orchestrator | ] 2026-02-05 03:43:17.868246 | orchestrator | }, 2026-02-05 03:43:17.868262 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-05 03:43:17.868279 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-05 03:43:17.868296 | orchestrator | "priority": 0, 2026-02-05 03:43:17.868309 | orchestrator | "weight": 0, 2026-02-05 03:43:17.868319 | orchestrator | "crush_location": "{}" 2026-02-05 03:43:17.868329 | orchestrator | }, 2026-02-05 03:43:17.868338 | orchestrator | { 2026-02-05 03:43:17.868348 | orchestrator | "rank": 1, 2026-02-05 03:43:17.868358 | orchestrator | "name": "testbed-node-1", 2026-02-05 03:43:17.868367 | orchestrator | "public_addrs": { 2026-02-05 03:43:17.868377 | orchestrator | "addrvec": [ 2026-02-05 
03:43:17.868386 | orchestrator | { 2026-02-05 03:43:17.868396 | orchestrator | "type": "v2", 2026-02-05 03:43:17.868405 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-05 03:43:17.868415 | orchestrator | "nonce": 0 2026-02-05 03:43:17.868425 | orchestrator | }, 2026-02-05 03:43:17.868434 | orchestrator | { 2026-02-05 03:43:17.868444 | orchestrator | "type": "v1", 2026-02-05 03:43:17.868453 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-05 03:43:17.868463 | orchestrator | "nonce": 0 2026-02-05 03:43:17.868478 | orchestrator | } 2026-02-05 03:43:17.868495 | orchestrator | ] 2026-02-05 03:43:17.868511 | orchestrator | }, 2026-02-05 03:43:17.868528 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-05 03:43:17.868547 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-05 03:43:17.868563 | orchestrator | "priority": 0, 2026-02-05 03:43:17.868579 | orchestrator | "weight": 0, 2026-02-05 03:43:17.868589 | orchestrator | "crush_location": "{}" 2026-02-05 03:43:17.868599 | orchestrator | }, 2026-02-05 03:43:17.868611 | orchestrator | { 2026-02-05 03:43:17.868628 | orchestrator | "rank": 2, 2026-02-05 03:43:17.868645 | orchestrator | "name": "testbed-node-2", 2026-02-05 03:43:17.868661 | orchestrator | "public_addrs": { 2026-02-05 03:43:17.868678 | orchestrator | "addrvec": [ 2026-02-05 03:43:17.868695 | orchestrator | { 2026-02-05 03:43:17.868712 | orchestrator | "type": "v2", 2026-02-05 03:43:17.868728 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-05 03:43:17.868745 | orchestrator | "nonce": 0 2026-02-05 03:43:17.868784 | orchestrator | }, 2026-02-05 03:43:17.868800 | orchestrator | { 2026-02-05 03:43:17.868817 | orchestrator | "type": "v1", 2026-02-05 03:43:17.868834 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-05 03:43:17.868851 | orchestrator | "nonce": 0 2026-02-05 03:43:17.868868 | orchestrator | } 2026-02-05 03:43:17.868884 | orchestrator | ] 2026-02-05 03:43:17.868897 | orchestrator | }, 2026-02-05 03:43:17.868907 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-05 03:43:17.868917 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-05 03:43:17.868927 | orchestrator | "priority": 0, 2026-02-05 03:43:17.868948 | orchestrator | "weight": 0, 2026-02-05 03:43:17.868958 | orchestrator | "crush_location": "{}" 2026-02-05 03:43:17.868968 | orchestrator | } 2026-02-05 03:43:17.868977 | orchestrator | ] 2026-02-05 03:43:17.868987 | orchestrator | } 2026-02-05 03:43:17.868997 | orchestrator | } 2026-02-05 03:43:17.869006 | orchestrator | 2026-02-05 03:43:17.869016 | orchestrator | # Ceph free space status 2026-02-05 03:43:17.869026 | orchestrator | 2026-02-05 03:43:17.869035 | orchestrator | + echo 2026-02-05 03:43:17.869045 | orchestrator | + echo '# Ceph free space status' 2026-02-05 03:43:17.869055 | orchestrator | + echo 2026-02-05 03:43:17.869064 | orchestrator | + ceph df 2026-02-05 03:43:18.455242 | orchestrator | --- RAW STORAGE --- 2026-02-05 03:43:18.455366 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-05 03:43:18.455407 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-05 03:43:18.455444 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-05 03:43:18.455465 | orchestrator | 2026-02-05 03:43:18.455484 | orchestrator | --- POOLS --- 2026-02-05 03:43:18.455503 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-05 03:43:18.455522 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-02-05 03:43:18.455541 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-05 03:43:18.455560 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-05 03:43:18.455579 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-05 03:43:18.455597 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-05 03:43:18.455617 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-05 03:43:18.455636 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-05 03:43:18.455654 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-05 03:43:18.455671 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-02-05 03:43:18.455689 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-05 03:43:18.455707 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-05 03:43:18.455725 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2026-02-05 03:43:18.455744 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-05 03:43:18.455794 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-05 03:43:18.506219 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-05 03:43:18.545323 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-05 03:43:18.545435 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-05 03:43:18.545453 | orchestrator | + osism apply facts 2026-02-05 03:43:20.722985 | orchestrator | 2026-02-05 03:43:20 | INFO  | Task aa02ca13-4833-4ee5-adda-46a9ff372dd6 (facts) was prepared for execution. 2026-02-05 03:43:20.723107 | orchestrator | 2026-02-05 03:43:20 | INFO  | It takes a moment until task aa02ca13-4833-4ee5-adda-46a9ff372dd6 (facts) has been started and output is visible here. 
2026-02-05 03:43:34.352806 | orchestrator | 2026-02-05 03:43:34.352928 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-05 03:43:34.352944 | orchestrator | 2026-02-05 03:43:34.352953 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 03:43:34.352962 | orchestrator | Thursday 05 February 2026 03:43:25 +0000 (0:00:00.273) 0:00:00.273 ***** 2026-02-05 03:43:34.352970 | orchestrator | ok: [testbed-manager] 2026-02-05 03:43:34.352979 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:43:34.352987 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:43:34.352995 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:43:34.353003 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:43:34.353011 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:43:34.353019 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:43:34.353027 | orchestrator | 2026-02-05 03:43:34.353035 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 03:43:34.353065 | orchestrator | Thursday 05 February 2026 03:43:26 +0000 (0:00:01.190) 0:00:01.464 ***** 2026-02-05 03:43:34.353074 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:43:34.353083 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:43:34.353091 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:43:34.353099 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:43:34.353107 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:43:34.353115 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:43:34.353122 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:43:34.353130 | orchestrator | 2026-02-05 03:43:34.353138 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 03:43:34.353146 | orchestrator | 2026-02-05 03:43:34.353154 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-05 03:43:34.353162 | orchestrator | Thursday 05 February 2026 03:43:27 +0000 (0:00:01.350) 0:00:02.814 ***** 2026-02-05 03:43:34.353169 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:43:34.353177 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:43:34.353185 | orchestrator | ok: [testbed-manager] 2026-02-05 03:43:34.353193 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:43:34.353200 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:43:34.353208 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:43:34.353216 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:43:34.353224 | orchestrator | 2026-02-05 03:43:34.353232 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 03:43:34.353240 | orchestrator | 2026-02-05 03:43:34.353248 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 03:43:34.353256 | orchestrator | Thursday 05 February 2026 03:43:33 +0000 (0:00:05.562) 0:00:08.377 ***** 2026-02-05 03:43:34.353264 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:43:34.353272 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:43:34.353280 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:43:34.353287 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:43:34.353295 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:43:34.353303 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:43:34.353311 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:43:34.353321 | orchestrator | 2026-02-05 03:43:34.353330 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:43:34.353340 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:43:34.353350 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-05 03:43:34.353360 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:43:34.353383 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:43:34.353392 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:43:34.353401 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:43:34.353410 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:43:34.353419 | orchestrator | 2026-02-05 03:43:34.353427 | orchestrator | 2026-02-05 03:43:34.353435 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:43:34.353443 | orchestrator | Thursday 05 February 2026 03:43:33 +0000 (0:00:00.563) 0:00:08.940 ***** 2026-02-05 03:43:34.353451 | orchestrator | =============================================================================== 2026-02-05 03:43:34.353459 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.56s 2026-02-05 03:43:34.353473 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s 2026-02-05 03:43:34.353482 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s 2026-02-05 03:43:34.353490 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-02-05 03:43:34.691927 | orchestrator | + osism validate ceph-mons 2026-02-05 03:44:07.796739 | orchestrator | 2026-02-05 03:44:07.796876 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-05 03:44:07.796898 | orchestrator | 2026-02-05 03:44:07.796912 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-05 03:44:07.796927 | orchestrator | Thursday 05 February 2026 03:43:51 +0000 (0:00:00.443) 0:00:00.443 ***** 2026-02-05 03:44:07.796939 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-05 03:44:07.796946 | orchestrator | 2026-02-05 03:44:07.796954 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-05 03:44:07.796961 | orchestrator | Thursday 05 February 2026 03:43:52 +0000 (0:00:00.827) 0:00:01.271 ***** 2026-02-05 03:44:07.796969 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-05 03:44:07.796976 | orchestrator | 2026-02-05 03:44:07.796984 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-05 03:44:07.796995 | orchestrator | Thursday 05 February 2026 03:43:53 +0000 (0:00:01.045) 0:00:02.317 ***** 2026-02-05 03:44:07.797008 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797020 | orchestrator | 2026-02-05 03:44:07.797031 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-05 03:44:07.797043 | orchestrator | Thursday 05 February 2026 03:43:53 +0000 (0:00:00.135) 0:00:02.452 ***** 2026-02-05 03:44:07.797056 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797070 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:44:07.797082 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:44:07.797093 | orchestrator | 2026-02-05 03:44:07.797101 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-05 03:44:07.797108 | orchestrator | Thursday 05 February 2026 03:43:54 +0000 (0:00:00.309) 0:00:02.762 ***** 2026-02-05 03:44:07.797115 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:44:07.797122 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:44:07.797129 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797137 | 
orchestrator | 2026-02-05 03:44:07.797144 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-05 03:44:07.797151 | orchestrator | Thursday 05 February 2026 03:43:55 +0000 (0:00:01.055) 0:00:03.818 ***** 2026-02-05 03:44:07.797159 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797167 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:44:07.797174 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:44:07.797181 | orchestrator | 2026-02-05 03:44:07.797188 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-05 03:44:07.797195 | orchestrator | Thursday 05 February 2026 03:43:55 +0000 (0:00:00.316) 0:00:04.134 ***** 2026-02-05 03:44:07.797203 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797210 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:44:07.797217 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:44:07.797224 | orchestrator | 2026-02-05 03:44:07.797232 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-05 03:44:07.797239 | orchestrator | Thursday 05 February 2026 03:43:55 +0000 (0:00:00.578) 0:00:04.713 ***** 2026-02-05 03:44:07.797246 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797253 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:44:07.797260 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:44:07.797268 | orchestrator | 2026-02-05 03:44:07.797275 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-05 03:44:07.797282 | orchestrator | Thursday 05 February 2026 03:43:56 +0000 (0:00:00.423) 0:00:05.137 ***** 2026-02-05 03:44:07.797290 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797316 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:44:07.797324 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:44:07.797331 | orchestrator | 2026-02-05 
03:44:07.797338 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-05 03:44:07.797345 | orchestrator | Thursday 05 February 2026 03:43:56 +0000 (0:00:00.291) 0:00:05.428 ***** 2026-02-05 03:44:07.797352 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797360 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:44:07.797367 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:44:07.797374 | orchestrator | 2026-02-05 03:44:07.797381 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-05 03:44:07.797389 | orchestrator | Thursday 05 February 2026 03:43:57 +0000 (0:00:00.507) 0:00:05.936 ***** 2026-02-05 03:44:07.797396 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797403 | orchestrator | 2026-02-05 03:44:07.797410 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-05 03:44:07.797418 | orchestrator | Thursday 05 February 2026 03:43:57 +0000 (0:00:00.253) 0:00:06.189 ***** 2026-02-05 03:44:07.797425 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797432 | orchestrator | 2026-02-05 03:44:07.797439 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-05 03:44:07.797447 | orchestrator | Thursday 05 February 2026 03:43:57 +0000 (0:00:00.251) 0:00:06.440 ***** 2026-02-05 03:44:07.797454 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797461 | orchestrator | 2026-02-05 03:44:07.797468 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:44:07.797476 | orchestrator | Thursday 05 February 2026 03:43:57 +0000 (0:00:00.253) 0:00:06.693 ***** 2026-02-05 03:44:07.797483 | orchestrator | 2026-02-05 03:44:07.797490 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:44:07.797497 | orchestrator | 
Thursday 05 February 2026 03:43:58 +0000 (0:00:00.090) 0:00:06.784 ***** 2026-02-05 03:44:07.797504 | orchestrator | 2026-02-05 03:44:07.797511 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:44:07.797518 | orchestrator | Thursday 05 February 2026 03:43:58 +0000 (0:00:00.075) 0:00:06.859 ***** 2026-02-05 03:44:07.797526 | orchestrator | 2026-02-05 03:44:07.797533 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-05 03:44:07.797540 | orchestrator | Thursday 05 February 2026 03:43:58 +0000 (0:00:00.075) 0:00:06.935 ***** 2026-02-05 03:44:07.797547 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797554 | orchestrator | 2026-02-05 03:44:07.797561 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-05 03:44:07.797583 | orchestrator | Thursday 05 February 2026 03:43:58 +0000 (0:00:00.248) 0:00:07.184 ***** 2026-02-05 03:44:07.797591 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797598 | orchestrator | 2026-02-05 03:44:07.797621 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-05 03:44:07.797629 | orchestrator | Thursday 05 February 2026 03:43:58 +0000 (0:00:00.260) 0:00:07.445 ***** 2026-02-05 03:44:07.797636 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797644 | orchestrator | 2026-02-05 03:44:07.797651 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-05 03:44:07.797658 | orchestrator | Thursday 05 February 2026 03:43:58 +0000 (0:00:00.120) 0:00:07.565 ***** 2026-02-05 03:44:07.797666 | orchestrator | changed: [testbed-node-0] 2026-02-05 03:44:07.797676 | orchestrator | 2026-02-05 03:44:07.797684 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-05 03:44:07.797691 | orchestrator | 
Thursday 05 February 2026 03:44:00 +0000 (0:00:01.766) 0:00:09.332 ***** 2026-02-05 03:44:07.797698 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797706 | orchestrator | 2026-02-05 03:44:07.797713 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-02-05 03:44:07.797720 | orchestrator | Thursday 05 February 2026 03:44:01 +0000 (0:00:00.516) 0:00:09.848 ***** 2026-02-05 03:44:07.797733 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797740 | orchestrator | 2026-02-05 03:44:07.797748 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-02-05 03:44:07.797755 | orchestrator | Thursday 05 February 2026 03:44:01 +0000 (0:00:00.130) 0:00:09.979 ***** 2026-02-05 03:44:07.797763 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797802 | orchestrator | 2026-02-05 03:44:07.797815 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-02-05 03:44:07.797827 | orchestrator | Thursday 05 February 2026 03:44:01 +0000 (0:00:00.325) 0:00:10.304 ***** 2026-02-05 03:44:07.797838 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797849 | orchestrator | 2026-02-05 03:44:07.797856 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-02-05 03:44:07.797863 | orchestrator | Thursday 05 February 2026 03:44:01 +0000 (0:00:00.293) 0:00:10.597 ***** 2026-02-05 03:44:07.797871 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:44:07.797878 | orchestrator | 2026-02-05 03:44:07.797885 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-02-05 03:44:07.797893 | orchestrator | Thursday 05 February 2026 03:44:01 +0000 (0:00:00.130) 0:00:10.728 ***** 2026-02-05 03:44:07.797900 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:44:07.797907 | orchestrator | 2026-02-05 03:44:07.797914 | orchestrator | TASK 
[Prepare status test vars] ************************************************
2026-02-05 03:44:07.797922 | orchestrator | Thursday 05 February 2026 03:44:02 +0000 (0:00:00.134) 0:00:10.863 *****
2026-02-05 03:44:07.797929 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:07.797936 | orchestrator |
2026-02-05 03:44:07.797943 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-05 03:44:07.797951 | orchestrator | Thursday 05 February 2026 03:44:02 +0000 (0:00:00.130) 0:00:10.993 *****
2026-02-05 03:44:07.797958 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:44:07.797965 | orchestrator |
2026-02-05 03:44:07.797972 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-05 03:44:07.797980 | orchestrator | Thursday 05 February 2026 03:44:03 +0000 (0:00:01.394) 0:00:12.388 *****
2026-02-05 03:44:07.797987 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:07.797994 | orchestrator |
2026-02-05 03:44:07.798001 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-05 03:44:07.798008 | orchestrator | Thursday 05 February 2026 03:44:03 +0000 (0:00:00.141) 0:00:12.709 *****
2026-02-05 03:44:07.798057 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:07.798072 | orchestrator |
2026-02-05 03:44:07.798086 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-05 03:44:07.798099 | orchestrator | Thursday 05 February 2026 03:44:04 +0000 (0:00:00.141) 0:00:12.850 *****
2026-02-05 03:44:07.798110 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:07.798117 | orchestrator |
2026-02-05 03:44:07.798125 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-05 03:44:07.798132 | orchestrator | Thursday 05 February 2026 03:44:04 +0000 (0:00:00.151) 0:00:13.002 *****
2026-02-05 03:44:07.798139 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:07.798146 | orchestrator |
2026-02-05 03:44:07.798154 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-05 03:44:07.798161 | orchestrator | Thursday 05 February 2026 03:44:04 +0000 (0:00:00.146) 0:00:13.148 *****
2026-02-05 03:44:07.798173 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:07.798180 | orchestrator |
2026-02-05 03:44:07.798187 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-05 03:44:07.798195 | orchestrator | Thursday 05 February 2026 03:44:04 +0000 (0:00:00.303) 0:00:13.451 *****
2026-02-05 03:44:07.798202 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:07.798209 | orchestrator |
2026-02-05 03:44:07.798216 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-05 03:44:07.798224 | orchestrator | Thursday 05 February 2026 03:44:04 +0000 (0:00:00.249) 0:00:13.701 *****
2026-02-05 03:44:07.798237 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:07.798244 | orchestrator |
2026-02-05 03:44:07.798252 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-05 03:44:07.798259 | orchestrator | Thursday 05 February 2026 03:44:05 +0000 (0:00:00.269) 0:00:13.970 *****
2026-02-05 03:44:07.798266 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:07.798273 | orchestrator |
2026-02-05 03:44:07.798280 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-05 03:44:07.798288 | orchestrator | Thursday 05 February 2026 03:44:07 +0000 (0:00:01.803) 0:00:15.773 *****
2026-02-05 03:44:07.798295 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:07.798302 | orchestrator |
2026-02-05 03:44:07.798309 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-05 03:44:07.798316 | orchestrator | Thursday 05 February 2026 03:44:07 +0000 (0:00:00.271) 0:00:16.045 *****
2026-02-05 03:44:07.798323 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:07.798331 | orchestrator |
2026-02-05 03:44:07.798344 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:10.518216 | orchestrator | Thursday 05 February 2026 03:44:07 +0000 (0:00:00.260) 0:00:16.305 *****
2026-02-05 03:44:10.518298 | orchestrator |
2026-02-05 03:44:10.518308 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:10.518315 | orchestrator | Thursday 05 February 2026 03:44:07 +0000 (0:00:00.088) 0:00:16.394 *****
2026-02-05 03:44:10.518322 | orchestrator |
2026-02-05 03:44:10.518329 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:10.518336 | orchestrator | Thursday 05 February 2026 03:44:07 +0000 (0:00:00.069) 0:00:16.464 *****
2026-02-05 03:44:10.518342 | orchestrator |
2026-02-05 03:44:10.518348 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-05 03:44:10.518355 | orchestrator | Thursday 05 February 2026 03:44:07 +0000 (0:00:00.072) 0:00:16.537 *****
2026-02-05 03:44:10.518362 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:10.518368 | orchestrator |
2026-02-05 03:44:10.518375 | orchestrator | TASK [Print report file information] *******************************************
2026-02-05 03:44:10.518381 | orchestrator | Thursday 05 February 2026 03:44:09 +0000 (0:00:01.550) 0:00:18.087 *****
2026-02-05 03:44:10.518387 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-05 03:44:10.518394 | orchestrator |     "msg": [
2026-02-05 03:44:10.518401 | orchestrator |         "Validator run completed.",
2026-02-05 03:44:10.518408 | orchestrator |         "You can find the report file here:",
2026-02-05 03:44:10.518415 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-02-05T03:43:52+00:00-report.json",
2026-02-05 03:44:10.518422 | orchestrator |         "on the following host:",
2026-02-05 03:44:10.518428 | orchestrator |         "testbed-manager"
2026-02-05 03:44:10.518435 | orchestrator |     ]
2026-02-05 03:44:10.518441 | orchestrator | }
2026-02-05 03:44:10.518448 | orchestrator |
2026-02-05 03:44:10.518454 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:44:10.518462 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-05 03:44:10.518470 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:44:10.518476 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:44:10.518483 | orchestrator |
2026-02-05 03:44:10.518489 | orchestrator |
2026-02-05 03:44:10.518495 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:44:10.518502 | orchestrator | Thursday 05 February 2026 03:44:10 +0000 (0:00:00.828) 0:00:18.916 *****
2026-02-05 03:44:10.518529 | orchestrator | ===============================================================================
2026-02-05 03:44:10.518536 | orchestrator | Aggregate test results step one ----------------------------------------- 1.80s
2026-02-05 03:44:10.518542 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.77s
2026-02-05 03:44:10.518548 | orchestrator | Write report file ------------------------------------------------------- 1.55s
2026-02-05 03:44:10.518555 | orchestrator | Gather status data ------------------------------------------------------ 1.39s
2026-02-05 03:44:10.518561 | orchestrator | Get container info ------------------------------------------------------ 1.06s
2026-02-05 03:44:10.518567 | orchestrator | Create report output directory ------------------------------------------ 1.05s
2026-02-05 03:44:10.518573 | orchestrator | Print report file information ------------------------------------------- 0.83s
2026-02-05 03:44:10.518579 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s
2026-02-05 03:44:10.518585 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s
2026-02-05 03:44:10.518591 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s
2026-02-05 03:44:10.518609 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.51s
2026-02-05 03:44:10.518616 | orchestrator | Prepare test data ------------------------------------------------------- 0.42s
2026-02-05 03:44:10.518622 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2026-02-05 03:44:10.518628 | orchestrator | Set health test data ---------------------------------------------------- 0.32s
2026-02-05 03:44:10.518634 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-02-05 03:44:10.518640 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-02-05 03:44:10.518646 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.30s
2026-02-05 03:44:10.518652 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s
2026-02-05 03:44:10.518659 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s
2026-02-05 03:44:10.518665 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-02-05 03:44:10.841917 | orchestrator | + osism validate ceph-mgrs
2026-02-05 03:44:42.378533 | orchestrator |
2026-02-05 03:44:42.378668 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-05 03:44:42.378692 | orchestrator |
2026-02-05 03:44:42.378710 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-05 03:44:42.378728 | orchestrator | Thursday 05 February 2026 03:44:27 +0000 (0:00:00.434) 0:00:00.434 *****
2026-02-05 03:44:42.378746 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.378762 | orchestrator |
2026-02-05 03:44:42.378803 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-05 03:44:42.378816 | orchestrator | Thursday 05 February 2026 03:44:28 +0000 (0:00:00.852) 0:00:01.287 *****
2026-02-05 03:44:42.378826 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.378836 | orchestrator |
2026-02-05 03:44:42.378846 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-05 03:44:42.378856 | orchestrator | Thursday 05 February 2026 03:44:29 +0000 (0:00:01.012) 0:00:02.300 *****
2026-02-05 03:44:42.378866 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.378877 | orchestrator |
2026-02-05 03:44:42.378887 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-05 03:44:42.378897 | orchestrator | Thursday 05 February 2026 03:44:29 +0000 (0:00:00.142) 0:00:02.443 *****
2026-02-05 03:44:42.378906 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.378916 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:44:42.378925 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:44:42.378935 | orchestrator |
2026-02-05 03:44:42.378945 | orchestrator | TASK [Get container info] ******************************************************
2026-02-05 03:44:42.378954 | orchestrator | Thursday 05 February 2026 03:44:29 +0000 (0:00:00.308) 0:00:02.752 *****
2026-02-05 03:44:42.378986 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:44:42.378996 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:44:42.379006 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379015 | orchestrator |
2026-02-05 03:44:42.379025 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-05 03:44:42.379034 | orchestrator | Thursday 05 February 2026 03:44:31 +0000 (0:00:01.054) 0:00:03.806 *****
2026-02-05 03:44:42.379047 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379058 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:44:42.379070 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:44:42.379081 | orchestrator |
2026-02-05 03:44:42.379092 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-05 03:44:42.379103 | orchestrator | Thursday 05 February 2026 03:44:31 +0000 (0:00:00.344) 0:00:04.150 *****
2026-02-05 03:44:42.379115 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379126 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:44:42.379137 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:44:42.379149 | orchestrator |
2026-02-05 03:44:42.379159 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-05 03:44:42.379171 | orchestrator | Thursday 05 February 2026 03:44:31 +0000 (0:00:00.508) 0:00:04.658 *****
2026-02-05 03:44:42.379182 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379194 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:44:42.379204 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:44:42.379215 | orchestrator |
2026-02-05 03:44:42.379227 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-05 03:44:42.379237 | orchestrator | Thursday 05 February 2026 03:44:32 +0000 (0:00:00.303) 0:00:04.962 *****
2026-02-05 03:44:42.379248 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379259 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:44:42.379270 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:44:42.379281 | orchestrator |
2026-02-05 03:44:42.379292 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-05 03:44:42.379303 | orchestrator | Thursday 05 February 2026 03:44:32 +0000 (0:00:00.313) 0:00:05.275 *****
2026-02-05 03:44:42.379313 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379324 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:44:42.379335 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:44:42.379346 | orchestrator |
2026-02-05 03:44:42.379356 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-05 03:44:42.379367 | orchestrator | Thursday 05 February 2026 03:44:32 +0000 (0:00:00.484) 0:00:05.760 *****
2026-02-05 03:44:42.379379 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379391 | orchestrator |
2026-02-05 03:44:42.379402 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-05 03:44:42.379411 | orchestrator | Thursday 05 February 2026 03:44:33 +0000 (0:00:00.257) 0:00:06.017 *****
2026-02-05 03:44:42.379421 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379430 | orchestrator |
2026-02-05 03:44:42.379440 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-05 03:44:42.379449 | orchestrator | Thursday 05 February 2026 03:44:33 +0000 (0:00:00.250) 0:00:06.280 *****
2026-02-05 03:44:42.379458 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379468 | orchestrator |
2026-02-05 03:44:42.379477 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:42.379487 | orchestrator | Thursday 05 February 2026 03:44:33 +0000 (0:00:00.119) 0:00:06.531 *****
2026-02-05 03:44:42.379497 | orchestrator |
2026-02-05 03:44:42.379506 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:42.379516 | orchestrator | Thursday 05 February 2026 03:44:33 +0000 (0:00:00.099) 0:00:06.650 *****
2026-02-05 03:44:42.379526 | orchestrator |
2026-02-05 03:44:42.379542 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:42.379559 | orchestrator | Thursday 05 February 2026 03:44:33 +0000 (0:00:00.099) 0:00:06.750 *****
2026-02-05 03:44:42.379586 | orchestrator |
2026-02-05 03:44:42.379601 | orchestrator | TASK [Print report file information] *******************************************
2026-02-05 03:44:42.379617 | orchestrator | Thursday 05 February 2026 03:44:34 +0000 (0:00:00.088) 0:00:06.838 *****
2026-02-05 03:44:42.379631 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379646 | orchestrator |
2026-02-05 03:44:42.379662 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-05 03:44:42.379679 | orchestrator | Thursday 05 February 2026 03:44:34 +0000 (0:00:00.264) 0:00:07.102 *****
2026-02-05 03:44:42.379697 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.379714 | orchestrator |
2026-02-05 03:44:42.379755 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-05 03:44:42.379768 | orchestrator | Thursday 05 February 2026 03:44:34 +0000 (0:00:00.292) 0:00:07.395 *****
2026-02-05 03:44:42.379847 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379858 | orchestrator |
2026-02-05 03:44:42.379868 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-05 03:44:42.379878 | orchestrator | Thursday 05 February 2026 03:44:34 +0000 (0:00:00.115) 0:00:07.510 *****
2026-02-05 03:44:42.379887 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:44:42.379897 | orchestrator |
2026-02-05 03:44:42.379906 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-05 03:44:42.379916 | orchestrator | Thursday 05 February 2026 03:44:36 +0000 (0:00:02.101) 0:00:09.611 *****
2026-02-05 03:44:42.379926 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379935 | orchestrator |
2026-02-05 03:44:42.379963 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-05 03:44:42.379973 | orchestrator | Thursday 05 February 2026 03:44:37 +0000 (0:00:00.437) 0:00:10.048 *****
2026-02-05 03:44:42.379983 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.379992 | orchestrator |
2026-02-05 03:44:42.380002 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-05 03:44:42.380018 | orchestrator | Thursday 05 February 2026 03:44:37 +0000 (0:00:00.137) 0:00:10.360 *****
2026-02-05 03:44:42.380038 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.380060 | orchestrator |
2026-02-05 03:44:42.380076 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-05 03:44:42.380092 | orchestrator | Thursday 05 February 2026 03:44:37 +0000 (0:00:00.153) 0:00:10.497 *****
2026-02-05 03:44:42.380107 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:44:42.380124 | orchestrator |
2026-02-05 03:44:42.380140 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-05 03:44:42.380156 | orchestrator | Thursday 05 February 2026 03:44:37 +0000 (0:00:00.153) 0:00:10.651 *****
2026-02-05 03:44:42.380173 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.380186 | orchestrator |
2026-02-05 03:44:42.380195 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-05 03:44:42.380205 | orchestrator | Thursday 05 February 2026 03:44:38 +0000 (0:00:00.252) 0:00:10.903 *****
2026-02-05 03:44:42.380215 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:44:42.380224 | orchestrator |
2026-02-05 03:44:42.380234 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-05 03:44:42.380244 | orchestrator | Thursday 05 February 2026 03:44:38 +0000 (0:00:00.245) 0:00:11.149 *****
2026-02-05 03:44:42.380253 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.380263 | orchestrator |
2026-02-05 03:44:42.380273 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-05 03:44:42.380283 | orchestrator | Thursday 05 February 2026 03:44:39 +0000 (0:00:01.285) 0:00:12.435 *****
2026-02-05 03:44:42.380292 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.380302 | orchestrator |
2026-02-05 03:44:42.380311 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-05 03:44:42.380321 | orchestrator | Thursday 05 February 2026 03:44:39 +0000 (0:00:00.264) 0:00:12.699 *****
2026-02-05 03:44:42.380342 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.380352 | orchestrator |
2026-02-05 03:44:42.380361 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:42.380371 | orchestrator | Thursday 05 February 2026 03:44:40 +0000 (0:00:00.250) 0:00:12.950 *****
2026-02-05 03:44:42.380380 | orchestrator |
2026-02-05 03:44:42.380390 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:42.380399 | orchestrator | Thursday 05 February 2026 03:44:40 +0000 (0:00:00.098) 0:00:13.049 *****
2026-02-05 03:44:42.380409 | orchestrator |
2026-02-05 03:44:42.380418 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-05 03:44:42.380428 | orchestrator | Thursday 05 February 2026 03:44:40 +0000 (0:00:00.073) 0:00:13.122 *****
2026-02-05 03:44:42.380437 | orchestrator |
2026-02-05 03:44:42.380447 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-05 03:44:42.380456 | orchestrator | Thursday 05 February 2026 03:44:40 +0000 (0:00:00.279) 0:00:13.402 *****
2026-02-05 03:44:42.380466 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-05 03:44:42.380476 | orchestrator |
2026-02-05 03:44:42.380485 | orchestrator | TASK [Print report file information] *******************************************
2026-02-05 03:44:42.380495 | orchestrator | Thursday 05 February 2026 03:44:41 +0000 (0:00:01.300) 0:00:14.703 *****
2026-02-05 03:44:42.380505 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-05 03:44:42.380514 | orchestrator |     "msg": [
2026-02-05 03:44:42.380524 | orchestrator |         "Validator run completed.",
2026-02-05 03:44:42.380540 | orchestrator |         "You can find the report file here:",
2026-02-05 03:44:42.380550 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-02-05T03:44:28+00:00-report.json",
2026-02-05 03:44:42.380561 | orchestrator |         "on the following host:",
2026-02-05 03:44:42.380571 | orchestrator |         "testbed-manager"
2026-02-05 03:44:42.380580 | orchestrator |     ]
2026-02-05 03:44:42.380590 | orchestrator | }
2026-02-05 03:44:42.380600 | orchestrator |
2026-02-05 03:44:42.380610 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:44:42.380621 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 03:44:42.380632 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:44:42.380653 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:44:42.699594 | orchestrator |
2026-02-05 03:44:42.699690 | orchestrator |
2026-02-05 03:44:42.699705 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:44:42.699718 | orchestrator | Thursday 05 February 2026 03:44:42 +0000 (0:00:00.424) 0:00:15.128 *****
2026-02-05 03:44:42.699728 | orchestrator | ===============================================================================
2026-02-05 03:44:42.699738 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.10s
2026-02-05 03:44:42.699769 | orchestrator | Write report file ------------------------------------------------------- 1.30s
2026-02-05 03:44:42.699863 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s
2026-02-05 03:44:42.699874 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2026-02-05 03:44:42.699884 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2026-02-05 03:44:42.699893 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s
2026-02-05 03:44:42.699903 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2026-02-05 03:44:42.699913 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.48s
2026-02-05 03:44:42.699949 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s
2026-02-05 03:44:42.699959 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.44s
2026-02-05 03:44:42.699968 | orchestrator | Print report file information ------------------------------------------- 0.42s
2026-02-05 03:44:42.699978 | orchestrator | Set test result to failed if container is missing ----------------------- 0.34s
2026-02-05 03:44:42.699988 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s
2026-02-05 03:44:42.699997 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s
2026-02-05 03:44:42.700005 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-02-05 03:44:42.700015 | orchestrator | Flush handlers ---------------------------------------------------------- 0.31s
2026-02-05 03:44:42.700025 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-02-05 03:44:42.700033 | orchestrator | Fail due to missing containers ------------------------------------------ 0.29s
2026-02-05 03:44:42.700043 | orchestrator | Print report file information ------------------------------------------- 0.26s
2026-02-05 03:44:42.700052 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-02-05 03:44:43.026222 | orchestrator | + osism validate ceph-osds
2026-02-05 03:45:04.333804 | orchestrator |
2026-02-05 03:45:04.333920 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-05 03:45:04.333945 | orchestrator |
2026-02-05 03:45:04.333963 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-05 03:45:04.333978 | orchestrator | Thursday 05 February 2026 03:44:59 +0000 (0:00:00.438) 0:00:00.438 *****
2026-02-05 03:45:04.333992 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 03:45:04.334005 | orchestrator |
2026-02-05 03:45:04.334077 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 03:45:04.334092 | orchestrator | Thursday 05 February 2026 03:45:00 +0000 (0:00:00.867) 0:00:01.306 *****
2026-02-05 03:45:04.334105 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 03:45:04.334117 | orchestrator |
2026-02-05 03:45:04.334128 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-05 03:45:04.334140 | orchestrator | Thursday 05 February 2026 03:45:01 +0000 (0:00:00.509) 0:00:01.815 *****
2026-02-05 03:45:04.334152 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 03:45:04.334164 | orchestrator |
2026-02-05 03:45:04.334175 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-05 03:45:04.334186 | orchestrator | Thursday 05 February 2026 03:45:01 +0000 (0:00:00.709) 0:00:02.524 *****
2026-02-05 03:45:04.334198 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:45:04.334212 | orchestrator |
2026-02-05 03:45:04.334224 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-05 03:45:04.334235 | orchestrator | Thursday 05 February 2026 03:45:01 +0000 (0:00:00.135) 0:00:02.660 *****
2026-02-05 03:45:04.334247 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:45:04.334258 | orchestrator |
2026-02-05 03:45:04.334270 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-05 03:45:04.334282 | orchestrator | Thursday 05 February 2026 03:45:02 +0000 (0:00:00.143) 0:00:02.804 *****
2026-02-05 03:45:04.334294 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:45:04.334305 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:45:04.334333 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:45:04.334345 | orchestrator |
2026-02-05 03:45:04.334359 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-05 03:45:04.334371 | orchestrator | Thursday 05 February 2026 03:45:02 +0000 (0:00:00.313) 0:00:03.117 *****
2026-02-05 03:45:04.334385 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:45:04.334399 | orchestrator |
2026-02-05 03:45:04.334414 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-05 03:45:04.334454 | orchestrator | Thursday 05 February 2026 03:45:02 +0000 (0:00:00.155) 0:00:03.272 *****
2026-02-05 03:45:04.334469 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:45:04.334484 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:45:04.334497 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:45:04.334511 | orchestrator |
2026-02-05 03:45:04.334526 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-05 03:45:04.334540 | orchestrator | Thursday 05 February 2026 03:45:02 +0000 (0:00:00.315) 0:00:03.588 *****
2026-02-05 03:45:04.334554 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:45:04.334570 | orchestrator |
2026-02-05 03:45:04.334583 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-05 03:45:04.334597 | orchestrator | Thursday 05 February 2026 03:45:03 +0000 (0:00:00.816) 0:00:04.404 *****
2026-02-05 03:45:04.334612 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:45:04.334627 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:45:04.334640 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:45:04.334653 | orchestrator |
2026-02-05 03:45:04.334669 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-05 03:45:04.334684 | orchestrator | Thursday 05 February 2026 03:45:04 +0000 (0:00:00.313) 0:00:04.717 *****
2026-02-05 03:45:04.334701 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dc9d9d09689d5456d821f94522f0be0b7583048f3e98f025adfafb8e17d32c3c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-05 03:45:04.334719 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47b3697bf9fde59f41e5a72e0fb120f01cb94b1dbedd1dd41688faf8fdd1e3cc', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-05 03:45:04.334733 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b53d4b45d81330ee8d40337da19c64c342aac759f08b98923db7136c2a5e02d5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-05 03:45:04.334747 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3afc8ca0e35d8a318294188fe9999b41a6650f254808af1a3abf41428c5e2303', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-05 03:45:04.334760 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eb618a5aa76245a1b8e867abfdbfbdfe64b4ee09247b4001b16894b3e588679a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-05 03:45:04.334830 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c51b2ae081e82da8a3eb8950bc092ed0a8963adacececab496ad771c4458703', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-05 03:45:04.334845 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0857a07a1dc6dea7d9cda6c51c718e1bde021dbcf31ae99965ac85b1b5f98db2', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-05 03:45:04.334860 | orchestrator | skipping: [testbed-node-3] => (item={'id': '647e52150e0454d494bc5129801e646408ce225fd582c7c78b62ecf94c5b6cc6', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-02-05 03:45:04.334874 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7c2fc5b012b2f687efc70e4879324f88c03109cfc2ccbe7c5732dd323a35e0a8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.334901 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1badb6fef35e1596bf53b2dde8a99dc3ee6b4e4f912e2f5514dfe0bb9bbbe4ec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.334917 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd009b40f2a86ad4a358657fa333af76c5c87e699a20e4f9bf93527fbab8c9bf7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.334933 | orchestrator | ok: [testbed-node-3] => (item={'id': '41a43734abad08a5744e3f559dfd01014471e71eef58f30a7a6451412fdc4682', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.334949 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b1e3e195c5f351e336a9401db22e8e447e69d79f54477524c50d8ae5d8588b39', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.334964 | orchestrator | skipping: [testbed-node-3] => (item={'id': '830b94535884a37d7e0a11d717771137623589e4d380737ac1590cf7d89f4da1', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.334979 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4d5746fc5548cbd68b6a6bd93157082ab98465d49d2c95edf65ea1344a3e7863', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-05 03:45:04.334994 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40496c54529d10ae8aeffb7a19a9c3093d83fc25b6d091363ed14139303ac79f', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-05 03:45:04.335008 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7c169426292ddaa1af6866432665fa4ad39a36572f67a6b67cf367764f897d00', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-05 03:45:04.335023 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9cca99ca8b987bf61bc62e1b1c16e855491f888a30d344ab7990a146edf03814', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-05 03:45:04.335038 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b24160151688afa192876586b22d39b4a0cc7910c92579900201c1b40892b495', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-05 03:45:04.335053 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7a90346ccd26d308182e60a7726a9d4bd3d37d812aa69d66d27bc9097abcaaaa', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-05 03:45:04.335076 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fdac38376043e3b2641228ed4b7e18bbc7823bd727f5b2e56001ac7e47ae7d10', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-05 03:45:04.586133 | orchestrator | skipping: [testbed-node-4] => (item={'id': '986fa2236f5ae526ce7bf89428b4d9a8c76e8b26e92ca73685a2d6afbb458b8b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-02-05 03:45:04.586278 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4cb9354502d27d8e15ff8a8f67bcb9a98587c20a00f430efcc2e9036f6bc8a17', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-05 03:45:04.586314 | orchestrator | skipping: [testbed-node-4] => (item={'id': '91c977ef8cb93a1598841c4612d8eb45bed2058569402706fef83cbd11c41215', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-05 03:45:04.586326 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'efd31e19bdd274543653e6e98617c94367aa397677865cae05ee07f24788f563', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-05 03:45:04.586338 | orchestrator | skipping: [testbed-node-4] => (item={'id': '087c07e5daecbb78473bdf8a3a751a3023407b92a05b1df31e8a7e0c3de42e41', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-05 03:45:04.586347 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f476d0386c4e3fa38fe6b6e0da63a465eca5e1dbe4e42da31b5155f1250eb1a6', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-02-05 03:45:04.586355 | orchestrator | skipping: [testbed-node-4] => (item={'id': '642ad2763015ccc948c89367fc586a334c2e0ea26d30e9077a3f0465fbcd346c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.586365 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e185c079195c79dd6c08d2e5095dfd3572eafdc191e4483047077fe6990748eb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.586373 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a66b1c5c0d47d2d6e358da9469eddfd2ac184a4b7ec62074fb1bc7587b8df4a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.586383 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ce48eef4129004b1dfa48b09401f11062e583c6ac91fae82ea2a7094e31ed55e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.586392 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ea8d604056c9f8d79f02e0030851f0441f8aae0c74a375e4fa3e9b2d2ae2d300', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-05 03:45:04.586400 | orchestrator | skipping: [testbed-node-4] => (item={'id':
'ee0a8cb33ed4df847cf686aa1c380c5f2838d1a5bfe842a70051f8d29eed5305', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-05 03:45:04.586408 | orchestrator | skipping: [testbed-node-4] => (item={'id': '660aa01be90b79d95e8ab48f859465f34058b1c83934d6294f9fea5ef2ce00ac', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-05 03:45:04.586417 | orchestrator | skipping: [testbed-node-4] => (item={'id': '77d87b87d8ca01014d22673b7e7e2cc6ce7c935f32be014014e7448b75b1bb58', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-05 03:45:04.586445 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ab405c8fc7a21f3eae73c441bde24a201df8207141ffd54553037ff972ee37c8', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-05 03:45:04.586468 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7f41e0922f899afebea247e0cf0ea301b0ff8f551c4aaa6f323bf492270ebeba', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-05 03:45:04.586482 | orchestrator | skipping: [testbed-node-4] => (item={'id': '58c8099c82c3b709eb7db1bf10245e40688fc724b8e8c2e9dc49d9ac576ae928', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-05 03:45:04.586496 | orchestrator | skipping: [testbed-node-5] => (item={'id': '56a71d199cc80f4a9f1841394785c630f2e39ea13c0756d7feb4fb5104b8f276', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-05 03:45:04.586509 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3e2bb8913af064e6b1a9857dfcdb4b5cf3c90d88e14438f6a52c94e51639a6b3', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-05 03:45:04.586527 | orchestrator | skipping: [testbed-node-5] => (item={'id': '156e1cd508e7082084c8f41eaa4e0084abea2a2cb83af32e9ff4eddfdfc84841', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-05 03:45:04.586541 | orchestrator | skipping: [testbed-node-5] => (item={'id': '79b5ae66122871c5b423538215201564eb7fa80e3ac90c12783616fea7fecfa2', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-05 03:45:04.586555 | orchestrator | skipping: [testbed-node-5] => (item={'id': '26064143c737e62bd3129c01cdaa774021ff510666d43e2ebea46618b605ce4d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-05 03:45:04.586568 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4020386ac2d4c20e60b760c1df27fb778691ac7328c150efb046f750bb4c6861', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-05 03:45:04.586581 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c35900cc1215cfcce9bb342b1b99693e047de55ba4ae123bc8a047f73235704', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-05 03:45:04.586594 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8148825c97fe517c869fe4be248c63261e57942401e2170648d9a325dd186944', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-02-05 03:45:04.586608 | orchestrator | skipping: [testbed-node-5] => (item={'id': '865287a2c06ae56f16dbe3f6aa372cf5f908e210930b3a5abe3b86de96e665f2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-05 03:45:04.586622 | orchestrator | skipping: [testbed-node-5] => (item={'id': '892ecc8364348db8ab6ad92c910e337704307ab96d606ab66dc04a555a2c61f9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-05 03:45:04.586636 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f2fdde64a2691c889f7afbfdbd770fdfb4a44dfebb8689244399aa74b27273b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-05 03:45:04.586657 | orchestrator | ok: [testbed-node-5] => (item={'id': '75053638b53f9ebe0f6ab6a202b0e2be24d39aefd3ee8f9a3e6b83458fdbbedf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-05 03:45:04.586682 | orchestrator | ok: [testbed-node-5] => (item={'id': '6d1515ce728c537687e56ecddd6cfaa9e548dc1d4f34f4d2a5ccee2a52327989', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-05 03:45:16.014074 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'd9866a10f7303056c104d7281d5280ebbc43189ba6ade9fd0d281bc875d3f0ba', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-05 03:45:16.014192 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d336215c4bb69972f212829a2f0b16c4d53e48133e0ac3974e521f618fb05f5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-05 03:45:16.014213 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3c9baf12453588ddb005a497cd53c1517d88daaa86095d4738e8cb57d0dd6724', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-05 03:45:16.014227 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2fd7d8a312df71266d2a3f77c06b61cd50d56f2a1bc77e93eb18bbf67acd5ff8', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-05 03:45:16.014260 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3468de60e7277b7b8d9a2d381a23b2eaff6c66da29bc671b9cc4878ec6257706', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-05 03:45:16.014275 | orchestrator | skipping: [testbed-node-5] => (item={'id': '057d505b54a596cbda9663e21ab0b8b3b3fec169080fc5b009d40edfa35a733c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-05 03:45:16.014289 | orchestrator | 2026-02-05 03:45:16.014304 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-05 03:45:16.014320 | orchestrator | Thursday 05 February 2026 
03:45:04 +0000 (0:00:00.531) 0:00:05.249 ***** 2026-02-05 03:45:16.014332 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.014348 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.014361 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.014373 | orchestrator | 2026-02-05 03:45:16.014385 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-05 03:45:16.014397 | orchestrator | Thursday 05 February 2026 03:45:04 +0000 (0:00:00.309) 0:00:05.558 ***** 2026-02-05 03:45:16.014409 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.014421 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:16.014432 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:16.014442 | orchestrator | 2026-02-05 03:45:16.014454 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-05 03:45:16.014465 | orchestrator | Thursday 05 February 2026 03:45:05 +0000 (0:00:00.510) 0:00:06.068 ***** 2026-02-05 03:45:16.014476 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.014487 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.014498 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.014511 | orchestrator | 2026-02-05 03:45:16.014522 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-05 03:45:16.014534 | orchestrator | Thursday 05 February 2026 03:45:05 +0000 (0:00:00.317) 0:00:06.386 ***** 2026-02-05 03:45:16.014546 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.014558 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.014596 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.014610 | orchestrator | 2026-02-05 03:45:16.014622 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-05 03:45:16.014635 | orchestrator | Thursday 05 February 2026 03:45:06 +0000 (0:00:00.295) 0:00:06.681 ***** 2026-02-05 
03:45:16.014649 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-05 03:45:16.014663 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-05 03:45:16.014675 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.014688 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-05 03:45:16.014700 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-05 03:45:16.014711 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:16.014723 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-05 03:45:16.014735 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-05 03:45:16.014747 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:16.014759 | orchestrator | 2026-02-05 03:45:16.014771 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-05 03:45:16.014817 | orchestrator | Thursday 05 February 2026 03:45:06 +0000 (0:00:00.344) 0:00:07.025 ***** 2026-02-05 03:45:16.014831 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.014842 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.014855 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.014865 | orchestrator | 2026-02-05 03:45:16.014878 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-05 03:45:16.014890 | orchestrator | Thursday 05 February 2026 03:45:06 +0000 (0:00:00.498) 0:00:07.524 ***** 2026-02-05 03:45:16.014903 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.014939 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:16.014949 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 03:45:16.014957 | orchestrator | 2026-02-05 03:45:16.014964 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-05 03:45:16.014972 | orchestrator | Thursday 05 February 2026 03:45:07 +0000 (0:00:00.298) 0:00:07.822 ***** 2026-02-05 03:45:16.014979 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.014986 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:16.014994 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:16.015001 | orchestrator | 2026-02-05 03:45:16.015009 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-05 03:45:16.015016 | orchestrator | Thursday 05 February 2026 03:45:07 +0000 (0:00:00.295) 0:00:08.118 ***** 2026-02-05 03:45:16.015023 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.015031 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.015038 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.015045 | orchestrator | 2026-02-05 03:45:16.015052 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-05 03:45:16.015060 | orchestrator | Thursday 05 February 2026 03:45:07 +0000 (0:00:00.300) 0:00:08.419 ***** 2026-02-05 03:45:16.015067 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.015074 | orchestrator | 2026-02-05 03:45:16.015081 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-05 03:45:16.015089 | orchestrator | Thursday 05 February 2026 03:45:08 +0000 (0:00:00.688) 0:00:09.108 ***** 2026-02-05 03:45:16.015096 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.015103 | orchestrator | 2026-02-05 03:45:16.015110 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-05 03:45:16.015118 | orchestrator | Thursday 05 February 2026 03:45:08 +0000 (0:00:00.251) 
0:00:09.360 ***** 2026-02-05 03:45:16.015125 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.015133 | orchestrator | 2026-02-05 03:45:16.015140 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:45:16.015158 | orchestrator | Thursday 05 February 2026 03:45:08 +0000 (0:00:00.251) 0:00:09.611 ***** 2026-02-05 03:45:16.015166 | orchestrator | 2026-02-05 03:45:16.015173 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:45:16.015180 | orchestrator | Thursday 05 February 2026 03:45:09 +0000 (0:00:00.069) 0:00:09.680 ***** 2026-02-05 03:45:16.015188 | orchestrator | 2026-02-05 03:45:16.015195 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:45:16.015203 | orchestrator | Thursday 05 February 2026 03:45:09 +0000 (0:00:00.072) 0:00:09.753 ***** 2026-02-05 03:45:16.015210 | orchestrator | 2026-02-05 03:45:16.015217 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-05 03:45:16.015224 | orchestrator | Thursday 05 February 2026 03:45:09 +0000 (0:00:00.073) 0:00:09.827 ***** 2026-02-05 03:45:16.015232 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.015240 | orchestrator | 2026-02-05 03:45:16.015252 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-05 03:45:16.015264 | orchestrator | Thursday 05 February 2026 03:45:09 +0000 (0:00:00.250) 0:00:10.077 ***** 2026-02-05 03:45:16.015276 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.015288 | orchestrator | 2026-02-05 03:45:16.015299 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-05 03:45:16.015311 | orchestrator | Thursday 05 February 2026 03:45:09 +0000 (0:00:00.259) 0:00:10.337 ***** 2026-02-05 03:45:16.015323 | orchestrator | ok: 
[testbed-node-3] 2026-02-05 03:45:16.015333 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.015346 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.015357 | orchestrator | 2026-02-05 03:45:16.015371 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-05 03:45:16.015383 | orchestrator | Thursday 05 February 2026 03:45:09 +0000 (0:00:00.301) 0:00:10.638 ***** 2026-02-05 03:45:16.015394 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.015406 | orchestrator | 2026-02-05 03:45:16.015418 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-05 03:45:16.015430 | orchestrator | Thursday 05 February 2026 03:45:10 +0000 (0:00:00.709) 0:00:11.347 ***** 2026-02-05 03:45:16.015442 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 03:45:16.015453 | orchestrator | 2026-02-05 03:45:16.015466 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-05 03:45:16.015478 | orchestrator | Thursday 05 February 2026 03:45:12 +0000 (0:00:01.657) 0:00:13.004 ***** 2026-02-05 03:45:16.015491 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.015503 | orchestrator | 2026-02-05 03:45:16.015515 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-05 03:45:16.015527 | orchestrator | Thursday 05 February 2026 03:45:12 +0000 (0:00:00.142) 0:00:13.147 ***** 2026-02-05 03:45:16.015535 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.015542 | orchestrator | 2026-02-05 03:45:16.015549 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-05 03:45:16.015556 | orchestrator | Thursday 05 February 2026 03:45:12 +0000 (0:00:00.319) 0:00:13.467 ***** 2026-02-05 03:45:16.015563 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:16.015570 | orchestrator | 
2026-02-05 03:45:16.015577 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-05 03:45:16.015585 | orchestrator | Thursday 05 February 2026 03:45:12 +0000 (0:00:00.127) 0:00:13.594 ***** 2026-02-05 03:45:16.015592 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.015599 | orchestrator | 2026-02-05 03:45:16.015606 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-05 03:45:16.015613 | orchestrator | Thursday 05 February 2026 03:45:13 +0000 (0:00:00.154) 0:00:13.748 ***** 2026-02-05 03:45:16.015620 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:16.015627 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:16.015634 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:16.015649 | orchestrator | 2026-02-05 03:45:16.015656 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-05 03:45:16.015663 | orchestrator | Thursday 05 February 2026 03:45:13 +0000 (0:00:00.318) 0:00:14.067 ***** 2026-02-05 03:45:16.015670 | orchestrator | changed: [testbed-node-3] 2026-02-05 03:45:16.015677 | orchestrator | changed: [testbed-node-5] 2026-02-05 03:45:16.015685 | orchestrator | changed: [testbed-node-4] 2026-02-05 03:45:26.726714 | orchestrator | 2026-02-05 03:45:26.726988 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-05 03:45:26.727024 | orchestrator | Thursday 05 February 2026 03:45:15 +0000 (0:00:02.610) 0:00:16.677 ***** 2026-02-05 03:45:26.727048 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727069 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727087 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727104 | orchestrator | 2026-02-05 03:45:26.727121 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-05 03:45:26.727139 | orchestrator | Thursday 05 February 
2026 03:45:16 +0000 (0:00:00.312) 0:00:16.990 ***** 2026-02-05 03:45:26.727157 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727177 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727193 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727212 | orchestrator | 2026-02-05 03:45:26.727230 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-05 03:45:26.727247 | orchestrator | Thursday 05 February 2026 03:45:16 +0000 (0:00:00.534) 0:00:17.525 ***** 2026-02-05 03:45:26.727262 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:26.727279 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:26.727295 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:26.727310 | orchestrator | 2026-02-05 03:45:26.727325 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-05 03:45:26.727342 | orchestrator | Thursday 05 February 2026 03:45:17 +0000 (0:00:00.325) 0:00:17.851 ***** 2026-02-05 03:45:26.727357 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727371 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727385 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727399 | orchestrator | 2026-02-05 03:45:26.727414 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-05 03:45:26.727436 | orchestrator | Thursday 05 February 2026 03:45:17 +0000 (0:00:00.547) 0:00:18.398 ***** 2026-02-05 03:45:26.727451 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:26.727466 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:26.727480 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:26.727494 | orchestrator | 2026-02-05 03:45:26.727508 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-05 03:45:26.727524 | orchestrator | Thursday 05 February 2026 03:45:18 +0000 (0:00:00.377) 
0:00:18.776 ***** 2026-02-05 03:45:26.727538 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:26.727554 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:26.727568 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:26.727581 | orchestrator | 2026-02-05 03:45:26.727596 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-05 03:45:26.727610 | orchestrator | Thursday 05 February 2026 03:45:18 +0000 (0:00:00.392) 0:00:19.169 ***** 2026-02-05 03:45:26.727626 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727640 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727653 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727661 | orchestrator | 2026-02-05 03:45:26.727669 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-05 03:45:26.727677 | orchestrator | Thursday 05 February 2026 03:45:19 +0000 (0:00:00.520) 0:00:19.689 ***** 2026-02-05 03:45:26.727685 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727693 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727701 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727709 | orchestrator | 2026-02-05 03:45:26.727719 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-05 03:45:26.727759 | orchestrator | Thursday 05 February 2026 03:45:19 +0000 (0:00:00.777) 0:00:20.466 ***** 2026-02-05 03:45:26.727772 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727813 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727828 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727842 | orchestrator | 2026-02-05 03:45:26.727858 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-05 03:45:26.727873 | orchestrator | Thursday 05 February 2026 03:45:20 +0000 (0:00:00.345) 0:00:20.811 ***** 2026-02-05 03:45:26.727882 | 
orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:26.727890 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:45:26.727897 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:45:26.727905 | orchestrator | 2026-02-05 03:45:26.727913 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-02-05 03:45:26.727921 | orchestrator | Thursday 05 February 2026 03:45:20 +0000 (0:00:00.373) 0:00:21.185 ***** 2026-02-05 03:45:26.727928 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:45:26.727936 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:45:26.727944 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:45:26.727951 | orchestrator | 2026-02-05 03:45:26.727959 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-05 03:45:26.727967 | orchestrator | Thursday 05 February 2026 03:45:21 +0000 (0:00:00.520) 0:00:21.706 ***** 2026-02-05 03:45:26.727975 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 03:45:26.727987 | orchestrator | 2026-02-05 03:45:26.728004 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-05 03:45:26.728023 | orchestrator | Thursday 05 February 2026 03:45:21 +0000 (0:00:00.277) 0:00:21.983 ***** 2026-02-05 03:45:26.728035 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:45:26.728048 | orchestrator | 2026-02-05 03:45:26.728061 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-05 03:45:26.728073 | orchestrator | Thursday 05 February 2026 03:45:21 +0000 (0:00:00.287) 0:00:22.271 ***** 2026-02-05 03:45:26.728086 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 03:45:26.728097 | orchestrator | 2026-02-05 03:45:26.728110 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-05 03:45:26.728122 | 
orchestrator | Thursday 05 February 2026 03:45:23 +0000 (0:00:01.758) 0:00:24.030 ***** 2026-02-05 03:45:26.728135 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 03:45:26.728148 | orchestrator | 2026-02-05 03:45:26.728161 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-05 03:45:26.728174 | orchestrator | Thursday 05 February 2026 03:45:23 +0000 (0:00:00.261) 0:00:24.291 ***** 2026-02-05 03:45:26.728187 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 03:45:26.728202 | orchestrator | 2026-02-05 03:45:26.728244 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:45:26.728253 | orchestrator | Thursday 05 February 2026 03:45:23 +0000 (0:00:00.285) 0:00:24.576 ***** 2026-02-05 03:45:26.728261 | orchestrator | 2026-02-05 03:45:26.728269 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:45:26.728277 | orchestrator | Thursday 05 February 2026 03:45:23 +0000 (0:00:00.077) 0:00:24.654 ***** 2026-02-05 03:45:26.728285 | orchestrator | 2026-02-05 03:45:26.728293 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-05 03:45:26.728300 | orchestrator | Thursday 05 February 2026 03:45:24 +0000 (0:00:00.079) 0:00:24.733 ***** 2026-02-05 03:45:26.728308 | orchestrator | 2026-02-05 03:45:26.728316 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-05 03:45:26.728323 | orchestrator | Thursday 05 February 2026 03:45:24 +0000 (0:00:00.079) 0:00:24.813 ***** 2026-02-05 03:45:26.728331 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 03:45:26.728339 | orchestrator | 2026-02-05 03:45:26.728346 | orchestrator | TASK [Print report file information] ******************************************* 
2026-02-05 03:45:26.728364 | orchestrator | Thursday 05 February 2026 03:45:25 +0000 (0:00:01.636) 0:00:26.450 *****
2026-02-05 03:45:26.728372 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-05 03:45:26.728380 | orchestrator |  "msg": [
2026-02-05 03:45:26.728388 | orchestrator |  "Validator run completed.",
2026-02-05 03:45:26.728396 | orchestrator |  "You can find the report file here:",
2026-02-05 03:45:26.728404 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-05T03:45:00+00:00-report.json",
2026-02-05 03:45:26.728428 | orchestrator |  "on the following host:",
2026-02-05 03:45:26.728436 | orchestrator |  "testbed-manager"
2026-02-05 03:45:26.728444 | orchestrator |  ]
2026-02-05 03:45:26.728452 | orchestrator | }
2026-02-05 03:45:26.728460 | orchestrator |
2026-02-05 03:45:26.728468 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:45:26.728477 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-05 03:45:26.728487 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 03:45:26.728495 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 03:45:26.728503 | orchestrator |
2026-02-05 03:45:26.728511 | orchestrator |
2026-02-05 03:45:26.728519 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:45:26.728527 | orchestrator | Thursday 05 February 2026 03:45:26 +0000 (0:00:00.605) 0:00:27.055 *****
2026-02-05 03:45:26.728535 | orchestrator | ===============================================================================
2026-02-05 03:45:26.728543 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.61s
2026-02-05 03:45:26.728550 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s
2026-02-05 03:45:26.728558 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.66s
2026-02-05 03:45:26.728566 | orchestrator | Write report file ------------------------------------------------------- 1.64s
2026-02-05 03:45:26.728573 | orchestrator | Get timestamp for report file ------------------------------------------- 0.87s
2026-02-05 03:45:26.728581 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.82s
2026-02-05 03:45:26.728589 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.78s
2026-02-05 03:45:26.728597 | orchestrator | Create report output directory ------------------------------------------ 0.71s
2026-02-05 03:45:26.728604 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.71s
2026-02-05 03:45:26.728612 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s
2026-02-05 03:45:26.728620 | orchestrator | Print report file information ------------------------------------------- 0.61s
2026-02-05 03:45:26.728628 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.55s
2026-02-05 03:45:26.728635 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.53s
2026-02-05 03:45:26.728648 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s
2026-02-05 03:45:26.728660 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.52s
2026-02-05 03:45:26.728680 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s
2026-02-05 03:45:26.728695 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.51s
2026-02-05 03:45:26.728706 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.51s
2026-02-05 03:45:26.728717 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s
2026-02-05 03:45:26.728729 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.39s
2026-02-05 03:45:27.070959 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-02-05 03:45:27.079691 | orchestrator | + set -e
2026-02-05 03:45:27.079863 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 03:45:27.079888 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 03:45:27.079905 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 03:45:27.079921 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 03:45:27.079938 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 03:45:27.079955 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 03:45:27.079973 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 03:45:27.079989 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 03:45:27.080006 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 03:45:27.080022 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 03:45:27.080038 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 03:45:27.080055 | orchestrator | ++ export ARA=false
2026-02-05 03:45:27.080071 | orchestrator | ++ ARA=false
2026-02-05 03:45:27.080087 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 03:45:27.080103 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 03:45:27.080120 | orchestrator | ++ export TEMPEST=false
2026-02-05 03:45:27.080136 | orchestrator | ++ TEMPEST=false
2026-02-05 03:45:27.080152 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 03:45:27.080169 | orchestrator | ++ IS_ZUUL=true
2026-02-05 03:45:27.080186 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:45:27.080202 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:45:27.080218 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 03:45:27.080310 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 03:45:27.080330 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 03:45:27.080348 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 03:45:27.080366 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 03:45:27.080383 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 03:45:27.080401 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 03:45:27.080419 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 03:45:27.080452 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-05 03:45:27.080472 | orchestrator | + source /etc/os-release
2026-02-05 03:45:27.080490 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2026-02-05 03:45:27.080508 | orchestrator | ++ NAME=Ubuntu
2026-02-05 03:45:27.080525 | orchestrator | ++ VERSION_ID=24.04
2026-02-05 03:45:27.080542 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2026-02-05 03:45:27.080560 | orchestrator | ++ VERSION_CODENAME=noble
2026-02-05 03:45:27.080577 | orchestrator | ++ ID=ubuntu
2026-02-05 03:45:27.080594 | orchestrator | ++ ID_LIKE=debian
2026-02-05 03:45:27.080611 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-02-05 03:45:27.080628 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-02-05 03:45:27.080646 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-02-05 03:45:27.080665 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-02-05 03:45:27.080683 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-02-05 03:45:27.080701 | orchestrator | ++ LOGO=ubuntu-logo
2026-02-05 03:45:27.080718 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-02-05 03:45:27.080737 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-02-05 03:45:27.080757 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-05 03:45:27.109192 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-05 03:45:50.036086 | orchestrator |
2026-02-05 03:45:50.036179 | orchestrator | # Status of Elasticsearch
2026-02-05 03:45:50.036189 | orchestrator |
2026-02-05 03:45:50.036197 | orchestrator | + pushd /opt/configuration/contrib
2026-02-05 03:45:50.036205 | orchestrator | + echo
2026-02-05 03:45:50.036213 | orchestrator | + echo '# Status of Elasticsearch'
2026-02-05 03:45:50.036219 | orchestrator | + echo
2026-02-05 03:45:50.036227 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-02-05 03:45:50.207495 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-02-05 03:45:50.207595 | orchestrator |
2026-02-05 03:45:50.207607 | orchestrator | # Status of MariaDB
2026-02-05 03:45:50.207615 | orchestrator |
2026-02-05 03:45:50.207622 | orchestrator | + echo
2026-02-05 03:45:50.207654 | orchestrator | + echo '# Status of MariaDB'
2026-02-05 03:45:50.207662 | orchestrator | + echo
2026-02-05 03:45:50.208294 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-05 03:45:50.265315 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-05 03:45:50.265414 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-05 03:45:50.265430 | orchestrator | + MARIADB_USER=root_shard_0
2026-02-05 03:45:50.265443 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-02-05 03:45:50.321057 | orchestrator | Reading package lists...
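The trace above shows 200-infrastructure.sh sourcing /etc/os-release and branching on the distro to pick its package list, installing only when `dpkg -s` fails. A minimal sketch of that pattern; the temporary file is a hypothetical stand-in for /etc/os-release so the sketch runs anywhere:

```shell
set -e

# Hypothetical stand-in for /etc/os-release.
os_release=$(mktemp)
cat > "$os_release" <<'EOF'
ID=ubuntu
VERSION_CODENAME=noble
EOF

# os-release is plain shell variable assignments, so it can be sourced.
# shellcheck disable=SC1090
source "$os_release"

# Branch on the distro ID, as the check script does.
if [[ $ID == ubuntu ]]; then
    packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
fi

# The real script runs `dpkg -s $packages` first and only calls
# `sudo apt-get install -y $packages` when that check fails.
echo "would ensure: $packages"
rm -f "$os_release"
```
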
2026-02-05 03:45:50.699588 | orchestrator | Building dependency tree...
2026-02-05 03:45:50.700129 | orchestrator | Reading state information...
2026-02-05 03:45:51.108111 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-02-05 03:45:51.108181 | orchestrator | bc set to manually installed.
2026-02-05 03:45:51.108187 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
2026-02-05 03:45:51.783491 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-02-05 03:45:51.783966 | orchestrator |
2026-02-05 03:45:51.783998 | orchestrator | # Status of Prometheus
2026-02-05 03:45:51.784018 | orchestrator | + echo
2026-02-05 03:45:51.784036 | orchestrator | + echo '# Status of Prometheus'
2026-02-05 03:45:51.784053 | orchestrator |
2026-02-05 03:45:51.784070 | orchestrator | + echo
2026-02-05 03:45:51.784087 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-02-05 03:45:51.848568 | orchestrator | Unauthorized
2026-02-05 03:45:51.851936 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-02-05 03:45:51.919373 | orchestrator | Unauthorized
2026-02-05 03:45:51.922508 | orchestrator |
2026-02-05 03:45:51.922572 | orchestrator | # Status of RabbitMQ
2026-02-05 03:45:51.922586 | orchestrator |
2026-02-05 03:45:51.922597 | orchestrator | + echo
2026-02-05 03:45:51.922609 | orchestrator | + echo '# Status of RabbitMQ'
2026-02-05 03:45:51.922620 | orchestrator | + echo
2026-02-05 03:45:51.923423 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-05 03:45:51.978439 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-05 03:45:51.978523 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-05 03:45:51.978536 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-02-05 03:45:52.444840 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
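Both the MariaDB and RabbitMQ checks above gate version-specific behaviour on `semver 9.5.0 10.0.0-0` returning -1 (less than), 0 (equal), or 1 (greater than). A hypothetical pure-bash stand-in for that helper, limited to numeric x.y.z versions (pre-release suffixes such as `-0` are stripped, not compared):

```shell
# semver_cmp A B -> prints -1, 0 or 1 (hypothetical stand-in for the
# `semver` helper seen in the trace; pre-release tags are ignored).
semver_cmp() {
    local a=${1%%-*} b=${2%%-*} i
    local IFS=.
    # Split "x.y.z" into arrays on the dots.
    local -a A=($a) B=($b)
    for i in 0 1 2; do
        if (( ${A[i]:-0} < ${B[i]:-0} )); then echo -1; return; fi
        if (( ${A[i]:-0} > ${B[i]:-0} )); then echo 1; return; fi
    done
    echo 0
}
```

The check scripts then branch on the result, e.g. `[[ $(semver_cmp "$MANAGER_VERSION" 10.0.0-0) -ge 0 ]]`, which is false for the 9.5.0 manager in this run.
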
2026-02-05 03:45:52.453488 | orchestrator |
2026-02-05 03:45:52.453569 | orchestrator | # Status of Redis
2026-02-05 03:45:52.453580 | orchestrator |
2026-02-05 03:45:52.453589 | orchestrator | + echo
2026-02-05 03:45:52.453598 | orchestrator | + echo '# Status of Redis'
2026-02-05 03:45:52.453607 | orchestrator | + echo
2026-02-05 03:45:52.453616 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-02-05 03:45:52.458654 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002151s;;;0.000000;10.000000
2026-02-05 03:45:52.458887 | orchestrator | + popd
2026-02-05 03:45:52.459244 | orchestrator |
2026-02-05 03:45:52.459276 | orchestrator | # Create backup of MariaDB database
2026-02-05 03:45:52.459291 | orchestrator |
2026-02-05 03:45:52.459305 | orchestrator | + echo
2026-02-05 03:45:52.459319 | orchestrator | + echo '# Create backup of MariaDB database'
2026-02-05 03:45:52.459331 | orchestrator | + echo
2026-02-05 03:45:52.459345 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-02-05 03:45:54.564557 | orchestrator | 2026-02-05 03:45:54 | INFO  | Task ebff9783-7efb-4c20-9b75-1eb718c49a0d (mariadb_backup) was prepared for execution.
2026-02-05 03:45:54.564672 | orchestrator | 2026-02-05 03:45:54 | INFO  | It takes a moment until task ebff9783-7efb-4c20-9b75-1eb718c49a0d (mariadb_backup) has been started and output is visible here.
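check_elasticsearch, check_galera_cluster, check_rabbitmq_cluster, and check_tcp above all follow the standard Nagios plugin contract: one status line (message, then optional performance data after `|`) plus an exit code of 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN). A tiny helper illustrating the convention; the function name is illustrative, not part of the OSISM plugins:

```shell
# Print a Nagios-style status line and return the conventional exit
# code for the given status (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).
nagios_exit() {
    local status=$1; shift
    echo "$status - $*"
    case $status in
        OK)       return 0 ;;
        WARNING)  return 1 ;;
        CRITICAL) return 2 ;;
        *)        return 3 ;;
    esac
}
```

For example, the Galera check's `OK: number of NODES = 3 (wsrep_cluster_size)` line corresponds to exit code 0, which is why the `set -e` wrapper script keeps running.
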
2026-02-05 03:47:14.710786 | orchestrator |
2026-02-05 03:47:14.710897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 03:47:14.711011 | orchestrator |
2026-02-05 03:47:14.711024 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 03:47:14.711036 | orchestrator | Thursday 05 February 2026 03:45:58 +0000 (0:00:00.171) 0:00:00.171 *****
2026-02-05 03:47:14.711048 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:47:14.711060 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:47:14.711071 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:47:14.711082 | orchestrator |
2026-02-05 03:47:14.711120 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 03:47:14.711132 | orchestrator | Thursday 05 February 2026 03:45:59 +0000 (0:00:00.324) 0:00:00.496 *****
2026-02-05 03:47:14.711143 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-05 03:47:14.711154 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-05 03:47:14.711164 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-05 03:47:14.711175 | orchestrator |
2026-02-05 03:47:14.711185 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-05 03:47:14.711196 | orchestrator |
2026-02-05 03:47:14.711206 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-05 03:47:14.711217 | orchestrator | Thursday 05 February 2026 03:45:59 +0000 (0:00:00.482) 0:00:01.127 *****
2026-02-05 03:47:14.711228 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 03:47:14.711239 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 03:47:14.711250 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 03:47:14.711261 | orchestrator |
2026-02-05 03:47:14.711271 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-05 03:47:14.711282 | orchestrator | Thursday 05 February 2026 03:46:00 +0000 (0:00:00.482) 0:00:01.610 *****
2026-02-05 03:47:14.711293 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 03:47:14.711307 | orchestrator |
2026-02-05 03:47:14.711320 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-02-05 03:47:14.711347 | orchestrator | Thursday 05 February 2026 03:46:00 +0000 (0:00:00.539) 0:00:02.150 *****
2026-02-05 03:47:14.711361 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:47:14.711373 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:47:14.711386 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:47:14.711398 | orchestrator |
2026-02-05 03:47:14.711411 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-02-05 03:47:14.711423 | orchestrator | Thursday 05 February 2026 03:46:04 +0000 (0:00:03.216) 0:00:05.366 *****
2026-02-05 03:47:14.711436 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-05 03:47:14.711448 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-05 03:47:14.711462 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-05 03:47:14.711475 | orchestrator | mariadb_bootstrap_restart
2026-02-05 03:47:14.711487 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:47:14.711500 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:47:14.711511 | orchestrator | changed: [testbed-node-0]
2026-02-05 03:47:14.711524 | orchestrator |
2026-02-05 03:47:14.711536 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-05 03:47:14.711549 | orchestrator | skipping: no hosts matched
2026-02-05 03:47:14.711560 | orchestrator |
2026-02-05 03:47:14.711574 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-05 03:47:14.711586 | orchestrator | skipping: no hosts matched
2026-02-05 03:47:14.711598 | orchestrator |
2026-02-05 03:47:14.711613 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-05 03:47:14.711633 | orchestrator | skipping: no hosts matched
2026-02-05 03:47:14.711653 | orchestrator |
2026-02-05 03:47:14.711674 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-05 03:47:14.711691 | orchestrator |
2026-02-05 03:47:14.711702 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-05 03:47:14.711713 | orchestrator | Thursday 05 February 2026 03:47:13 +0000 (0:01:09.603) 0:01:14.970 *****
2026-02-05 03:47:14.711724 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:47:14.711734 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:47:14.711745 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:47:14.711755 | orchestrator |
2026-02-05 03:47:14.711766 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-05 03:47:14.711787 | orchestrator | Thursday 05 February 2026 03:47:13 +0000 (0:00:00.303) 0:01:15.274 *****
2026-02-05 03:47:14.711797 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:47:14.711808 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:47:14.711818 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:47:14.711829 | orchestrator |
2026-02-05 03:47:14.711884 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:47:14.711898 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:47:14.711931 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 03:47:14.711943 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 03:47:14.711954 | orchestrator |
2026-02-05 03:47:14.711965 | orchestrator |
2026-02-05 03:47:14.711975 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:47:14.711986 | orchestrator | Thursday 05 February 2026 03:47:14 +0000 (0:00:00.426) 0:01:15.700 *****
2026-02-05 03:47:14.711996 | orchestrator | ===============================================================================
2026-02-05 03:47:14.712007 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 69.60s
2026-02-05 03:47:14.712039 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.22s
2026-02-05 03:47:14.712051 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-02-05 03:47:14.712062 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s
2026-02-05 03:47:14.712073 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.48s
2026-02-05 03:47:14.712084 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s
2026-02-05 03:47:14.712094 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-05 03:47:14.712105 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s
2026-02-05 03:47:15.023542 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-02-05 03:47:15.031276 | orchestrator | + set -e
2026-02-05 03:47:15.031392 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 03:47:15.031408 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 03:47:15.031421 | orchestrator | ++ INTERACTIVE=false
2026-02-05 03:47:15.031432 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 03:47:15.031443 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 03:47:15.031454 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-05 03:47:15.031610 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-05 03:47:15.035194 | orchestrator |
2026-02-05 03:47:15.035280 | orchestrator | # OpenStack endpoints
2026-02-05 03:47:15.035296 | orchestrator |
2026-02-05 03:47:15.035308 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 03:47:15.035319 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 03:47:15.035330 | orchestrator | + export OS_CLOUD=admin
2026-02-05 03:47:15.035341 | orchestrator | + OS_CLOUD=admin
2026-02-05 03:47:15.035352 | orchestrator | + echo
2026-02-05 03:47:15.035363 | orchestrator | + echo '# OpenStack endpoints'
2026-02-05 03:47:15.035374 | orchestrator | + echo
2026-02-05 03:47:15.035385 | orchestrator | + openstack endpoint list
2026-02-05 03:47:18.338639 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-05 03:47:18.338750 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-02-05 03:47:18.338776 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-05 03:47:18.338822 | orchestrator | | 019fe405c97b469f993800eda5a89f95 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-05 03:47:18.338853 | orchestrator | | 05763af662cd4820843394fb7453540d | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-02-05 03:47:18.338865 | orchestrator | | 082082a6bd1e4ff4b67ddb04d94ce1ca | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-02-05 03:47:18.338876 | orchestrator | | 09ca1cff77ed43ffada8f0eb42c146e2 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-02-05 03:47:18.338888 | orchestrator | | 2018b59533f04ccb9094581dc93990ac | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-05 03:47:18.338899 | orchestrator | | 2d41572841204fafb5bf08ce5ff8e78b | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-02-05 03:47:18.338960 | orchestrator | | 2fd21ac082f14b258f21634feb7da4eb | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-05 03:47:18.338975 | orchestrator | | 30cbadd7af1f47b68235dcb1491e9cb4 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-05 03:47:18.338986 | orchestrator | | 32cb407434e1410da64e2cecc19d0eb6 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-02-05 03:47:18.338997 | orchestrator | | 368f3b2e116442c2bce25a8d4768b9a7 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-02-05 03:47:18.339008 | orchestrator | | 3e753eacefc5466b979236acd05f9814 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-02-05 03:47:18.339019 | orchestrator | | 42f932510aea4c93b1924f4b3bccc9c8 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-02-05 03:47:18.339030 | orchestrator | | 5d6987cba9974470bada1935495b2997 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-02-05 03:47:18.339041 | orchestrator | | 5f3975aa37e64945bd7e6b1778cfe205 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-02-05 03:47:18.339052 | orchestrator | | 7cf551a0264a4400b92b953cea361665 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-05 03:47:18.339063 | orchestrator | | 7ff4bae06d4346ae9bcbabd076fa843f | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-05 03:47:18.339073 | orchestrator | | 87e10fb6e53c40a2885f940f166a3378 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-02-05 03:47:18.339085 | orchestrator | | 895adc1443654077b5738c080a2c1c5c | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-02-05 03:47:18.339095 | orchestrator | | 8e9f543a45ca4df684435aab243eba9f | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-02-05 03:47:18.339106 | orchestrator | | 910145fe1c864820a5b88a1b2b20e9d1 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-02-05 03:47:18.339136 | orchestrator | | 97255e5fd29e49a4bf6d143917d29b87 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-02-05 03:47:18.339158 | orchestrator | | ad3e749f038c4c94a45d332e055df382 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-02-05 03:47:18.339187 | orchestrator | | c2bef90e21e4444eb2e806bf7d7c3d7f | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-02-05 03:47:18.339201 | orchestrator | | c862817538844a84b8a489a4cd1cf129 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-02-05 03:47:18.339213 | orchestrator | | ccbeae1723f4408b82c66c052e7a882d | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-02-05 03:47:18.339226 | orchestrator | | d840582730614e50bd69ff72c4981bdc | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-02-05 03:47:18.339238 | orchestrator | | dac5f5815afa4b10a0c1928406746b9e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-02-05 03:47:18.339251 | orchestrator | | ea818994af6c4770bd09e54ed8e59be4 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-02-05 03:47:18.339264 | orchestrator | | edcde7a29ee04839ac079e106bd01e21 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-02-05 03:47:18.339277 | orchestrator | | f52672948ac34e24892b2d7f6a77a44a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-02-05 03:47:18.339290 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-05 03:47:18.605384 | orchestrator |
2026-02-05 03:47:18.605452 | orchestrator | # Cinder
2026-02-05 03:47:18.605458 | orchestrator |
2026-02-05 03:47:18.605463 | orchestrator | + echo
2026-02-05 03:47:18.605467 | orchestrator | + echo '# Cinder'
2026-02-05 03:47:18.605471 | orchestrator | + echo
2026-02-05 03:47:18.605475 | orchestrator | + openstack volume service list
2026-02-05 03:47:21.242784 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-05 03:47:21.242875 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-05 03:47:21.242897 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-05 03:47:21.242905 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-05T03:47:18.000000 |
2026-02-05 03:47:21.242912 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-05T03:47:18.000000 |
2026-02-05 03:47:21.242933 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-05T03:47:18.000000 |
2026-02-05 03:47:21.242940 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-05T03:47:18.000000 |
2026-02-05 03:47:21.242946 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-05T03:47:13.000000 |
2026-02-05 03:47:21.242952 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-05T03:47:14.000000 |
2026-02-05 03:47:21.242959 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-05T03:47:12.000000 |
2026-02-05 03:47:21.242972 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-05T03:47:13.000000 |
2026-02-05 03:47:21.242978 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-05T03:47:13.000000 |
2026-02-05 03:47:21.242999 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-05 03:47:21.504453 | orchestrator |
2026-02-05 03:47:21.504565 | orchestrator | # Neutron
2026-02-05 03:47:21.504587 | orchestrator |
2026-02-05 03:47:21.504606 | orchestrator | + echo
2026-02-05 03:47:21.504624 | orchestrator | + echo '# Neutron'
2026-02-05 03:47:21.504642 | orchestrator | + echo
2026-02-05 03:47:21.504659 | orchestrator | + openstack network agent list
2026-02-05 03:47:24.224402 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-05 03:47:24.224529 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-05 03:47:24.224552 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-05 03:47:24.224568 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-05 03:47:24.224583 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-05 03:47:24.224599 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-05 03:47:24.224613 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-05 03:47:24.224653 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-05 03:47:24.224668 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-05 03:47:24.224683 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-05 03:47:24.224698 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-05 03:47:24.224713 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-05 03:47:24.224728 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-05 03:47:24.500456 | orchestrator | + openstack network service provider list
2026-02-05 03:47:27.042775 | orchestrator | +---------------+------+---------+
2026-02-05 03:47:27.042878 | orchestrator | | Service Type | Name | Default |
2026-02-05 03:47:27.042888 | orchestrator | +---------------+------+---------+
2026-02-05 03:47:27.042904 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-05 03:47:27.042911 | orchestrator | +---------------+------+---------+
2026-02-05 03:47:27.329664 | orchestrator |
2026-02-05 03:47:27.329747 | orchestrator | # Nova
2026-02-05 03:47:27.329761 | orchestrator |
2026-02-05 03:47:27.329770 | orchestrator | + echo
2026-02-05 03:47:27.329780 | orchestrator | + echo '# Nova'
2026-02-05 03:47:27.329789 | orchestrator | + echo
2026-02-05 03:47:27.329798 | orchestrator | + openstack compute service list
2026-02-05 03:47:30.054991 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-05 03:47:30.055085 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-05 03:47:30.055094 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-05 03:47:30.055101 | orchestrator | | c4031a1a-b1f9-4383-963a-3ecdddb30d57 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-05T03:47:26.000000 |
2026-02-05 03:47:30.055135 | orchestrator | | 8ca4cf25-65a7-4283-8061-b4d25e0ae34b | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-05T03:47:29.000000 |
2026-02-05 03:47:30.055142 | orchestrator | | a2e497ec-87ad-41a4-b1af-39cbb63b8206 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-05T03:47:29.000000 |
2026-02-05 03:47:30.055148 | orchestrator | | 6dabe610-a8b7-43b8-96da-7e1d901a773b | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-05T03:47:28.000000 |
2026-02-05 03:47:30.055154 | orchestrator | | 085cb538-fbe4-46fd-968c-f3033b4ad770 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-05T03:47:28.000000 |
2026-02-05 03:47:30.055160 | orchestrator | | 46650bc5-68bd-4b5b-b5a7-314d3a77be4c | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-05T03:47:29.000000 |
2026-02-05 03:47:30.055166 | orchestrator | | 1cb5c29e-f8ab-4eb9-851a-ccc3ad01ae60 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-05T03:47:23.000000 |
2026-02-05 03:47:30.055173 | orchestrator | | c1a5aa4e-5900-4e01-a4f5-aaa68b14f2c2 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-05T03:47:25.000000 |
2026-02-05 03:47:30.055179 | orchestrator | | b90e56b2-60ff-4539-b4a2-d13f2e71adbb | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-05T03:47:25.000000 |
2026-02-05 03:47:30.055185 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-05 03:47:30.362757 | orchestrator | + openstack hypervisor list
2026-02-05 03:47:33.532337 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-05 03:47:33.532451 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-02-05 03:47:33.532467 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-05 03:47:33.532479 | orchestrator | | aba0d66a-7771-4d9e-84b2-1c90064381e3 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-02-05 03:47:33.532490 | orchestrator | | 711f3d18-34df-451c-be90-d0131636e000 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-02-05 03:47:33.532504 | orchestrator | | 2d5845e2-cf56-4e28-88ab-c2a7c355aa22 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-02-05 03:47:33.532524 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-05 03:47:33.818608 | orchestrator |
2026-02-05 03:47:33.818730 | orchestrator | # Run OpenStack test play
2026-02-05 03:47:33.818760 | orchestrator |
2026-02-05 03:47:33.818788 | orchestrator | + echo
2026-02-05 03:47:33.818808 | orchestrator | + echo '# Run OpenStack test play'
2026-02-05 03:47:33.818830 | orchestrator | + echo
2026-02-05 03:47:33.818849 | orchestrator | + osism apply --environment openstack test
2026-02-05 03:47:35.855800 | orchestrator | 2026-02-05 03:47:35 | INFO  | Trying to run play test in environment openstack
2026-02-05 03:47:46.055425 | orchestrator | 2026-02-05 03:47:46 | INFO  | Task cc46fa75-b7fe-478d-8b99-7ebff3e04067 (test) was prepared for execution.
2026-02-05 03:47:46.055539 | orchestrator | 2026-02-05 03:47:46 | INFO  | It takes a moment until task cc46fa75-b7fe-478d-8b99-7ebff3e04067 (test) has been started and output is visible here.
2026-02-05 03:50:31.545980 | orchestrator |
2026-02-05 03:50:31.546086 | orchestrator | PLAY [Create test project] *****************************************************
2026-02-05 03:50:31.546094 | orchestrator |
2026-02-05 03:50:31.546098 | orchestrator | TASK [Create test domain] ******************************************************
2026-02-05 03:50:31.546104 | orchestrator | Thursday 05 February 2026 03:47:50 +0000 (0:00:00.088) 0:00:00.088 *****
2026-02-05 03:50:31.546108 | orchestrator | changed: [localhost]
2026-02-05 03:50:31.546113 | orchestrator |
2026-02-05 03:50:31.546118 | orchestrator | TASK [Create test-admin user] **************************************************
2026-02-05 03:50:31.546121 | orchestrator | Thursday 05 February 2026 03:47:54 +0000 (0:00:03.714) 0:00:03.802 *****
2026-02-05 03:50:31.546125 | orchestrator | changed: [localhost]
2026-02-05 03:50:31.546129 | orchestrator |
2026-02-05 03:50:31.546148 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-02-05 03:50:31.546152 | orchestrator | Thursday 05 February 2026 03:47:58 +0000 (0:00:04.231) 0:00:08.034 *****
2026-02-05 03:50:31.546156 | orchestrator | changed: [localhost]
2026-02-05
03:50:31.546160 | orchestrator | 2026-02-05 03:50:31.546210 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-05 03:50:31.546218 | orchestrator | Thursday 05 February 2026 03:48:04 +0000 (0:00:06.524) 0:00:14.558 ***** 2026-02-05 03:50:31.546224 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546231 | orchestrator | 2026-02-05 03:50:31.546236 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-05 03:50:31.546240 | orchestrator | Thursday 05 February 2026 03:48:08 +0000 (0:00:03.961) 0:00:18.520 ***** 2026-02-05 03:50:31.546243 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546247 | orchestrator | 2026-02-05 03:50:31.546251 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-05 03:50:31.546255 | orchestrator | Thursday 05 February 2026 03:48:13 +0000 (0:00:04.313) 0:00:22.833 ***** 2026-02-05 03:50:31.546260 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-05 03:50:31.546264 | orchestrator | changed: [localhost] => (item=member) 2026-02-05 03:50:31.546308 | orchestrator | changed: [localhost] => (item=creator) 2026-02-05 03:50:31.546314 | orchestrator | 2026-02-05 03:50:31.546318 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-05 03:50:31.546322 | orchestrator | Thursday 05 February 2026 03:48:24 +0000 (0:00:11.463) 0:00:34.297 ***** 2026-02-05 03:50:31.546325 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546329 | orchestrator | 2026-02-05 03:50:31.546333 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-05 03:50:31.546337 | orchestrator | Thursday 05 February 2026 03:48:28 +0000 (0:00:04.446) 0:00:38.744 ***** 2026-02-05 03:50:31.546340 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546344 | orchestrator | 2026-02-05 
03:50:31.546348 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-05 03:50:31.546355 | orchestrator | Thursday 05 February 2026 03:48:33 +0000 (0:00:04.996) 0:00:43.740 ***** 2026-02-05 03:50:31.546360 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546366 | orchestrator | 2026-02-05 03:50:31.546373 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-05 03:50:31.546379 | orchestrator | Thursday 05 February 2026 03:48:38 +0000 (0:00:04.379) 0:00:48.120 ***** 2026-02-05 03:50:31.546386 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546391 | orchestrator | 2026-02-05 03:50:31.546397 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-02-05 03:50:31.546402 | orchestrator | Thursday 05 February 2026 03:48:42 +0000 (0:00:04.002) 0:00:52.122 ***** 2026-02-05 03:50:31.546407 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546413 | orchestrator | 2026-02-05 03:50:31.546418 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-05 03:50:31.546425 | orchestrator | Thursday 05 February 2026 03:48:46 +0000 (0:00:03.996) 0:00:56.119 ***** 2026-02-05 03:50:31.546431 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546437 | orchestrator | 2026-02-05 03:50:31.546442 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-05 03:50:31.546449 | orchestrator | Thursday 05 February 2026 03:48:50 +0000 (0:00:03.863) 0:00:59.982 ***** 2026-02-05 03:50:31.546454 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546458 | orchestrator | 2026-02-05 03:50:31.546462 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-05 03:50:31.546466 | orchestrator | Thursday 05 February 2026 03:48:54 +0000 (0:00:04.696) 0:01:04.679 ***** 2026-02-05 
03:50:31.546470 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546473 | orchestrator | 2026-02-05 03:50:31.546477 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-05 03:50:31.546481 | orchestrator | Thursday 05 February 2026 03:49:00 +0000 (0:00:05.306) 0:01:09.985 ***** 2026-02-05 03:50:31.546491 | orchestrator | changed: [localhost] 2026-02-05 03:50:31.546495 | orchestrator | 2026-02-05 03:50:31.546499 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-05 03:50:31.546503 | orchestrator | 2026-02-05 03:50:31.546508 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-05 03:50:31.546514 | orchestrator | Thursday 05 February 2026 03:49:11 +0000 (0:00:10.914) 0:01:20.900 ***** 2026-02-05 03:50:31.546520 | orchestrator | ok: [localhost] 2026-02-05 03:50:31.546526 | orchestrator | 2026-02-05 03:50:31.546531 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-05 03:50:31.546538 | orchestrator | Thursday 05 February 2026 03:49:14 +0000 (0:00:03.556) 0:01:24.456 ***** 2026-02-05 03:50:31.546543 | orchestrator | skipping: [localhost] 2026-02-05 03:50:31.546549 | orchestrator | 2026-02-05 03:50:31.546556 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-05 03:50:31.546563 | orchestrator | Thursday 05 February 2026 03:49:14 +0000 (0:00:00.043) 0:01:24.499 ***** 2026-02-05 03:50:31.546570 | orchestrator | skipping: [localhost] 2026-02-05 03:50:31.546577 | orchestrator | 2026-02-05 03:50:31.546583 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-05 03:50:31.546604 | orchestrator | Thursday 05 February 2026 03:49:14 +0000 (0:00:00.051) 0:01:24.551 ***** 2026-02-05 03:50:31.546610 | orchestrator | skipping: [localhost] => (item=test-4)  
2026-02-05 03:50:31.546618 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-05 03:50:31.546638 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-05 03:50:31.546646 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-05 03:50:31.546652 | orchestrator | skipping: [localhost] => (item=test)  2026-02-05 03:50:31.546659 | orchestrator | skipping: [localhost] 2026-02-05 03:50:31.546665 | orchestrator | 2026-02-05 03:50:31.546672 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-05 03:50:31.546678 | orchestrator | Thursday 05 February 2026 03:49:14 +0000 (0:00:00.151) 0:01:24.702 ***** 2026-02-05 03:50:31.546684 | orchestrator | skipping: [localhost] 2026-02-05 03:50:31.546690 | orchestrator | 2026-02-05 03:50:31.546697 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-05 03:50:31.546703 | orchestrator | Thursday 05 February 2026 03:49:15 +0000 (0:00:00.158) 0:01:24.860 ***** 2026-02-05 03:50:31.546710 | orchestrator | changed: [localhost] => (item=test) 2026-02-05 03:50:31.546716 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-05 03:50:31.546723 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-05 03:50:31.546730 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-05 03:50:31.546736 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-05 03:50:31.546742 | orchestrator | 2026-02-05 03:50:31.546749 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-05 03:50:31.546756 | orchestrator | Thursday 05 February 2026 03:49:19 +0000 (0:00:04.896) 0:01:29.757 ***** 2026-02-05 03:50:31.546764 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-05 03:50:31.546772 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-02-05 03:50:31.546781 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-05 03:50:31.546792 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-02-05 03:50:31.546804 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-02-05 03:50:31.546816 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j90004796827.3698', 'results_file': '/ansible/.ansible_async/j90004796827.3698', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.546831 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j140083761906.3723', 'results_file': '/ansible/.ansible_async/j140083761906.3723', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.546851 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j186818356442.3748', 'results_file': '/ansible/.ansible_async/j186818356442.3748', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.546863 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j230800555493.3773', 'results_file': '/ansible/.ansible_async/j230800555493.3773', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.546872 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j972055918210.3798', 'results_file': '/ansible/.ansible_async/j972055918210.3798', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.546880 | orchestrator | 2026-02-05 03:50:31.546890 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-05 
03:50:31.546898 | orchestrator | Thursday 05 February 2026 03:50:17 +0000 (0:00:57.394) 0:02:27.151 ***** 2026-02-05 03:50:31.546905 | orchestrator | changed: [localhost] => (item=test) 2026-02-05 03:50:31.546912 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-05 03:50:31.546917 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-05 03:50:31.546923 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-05 03:50:31.546929 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-05 03:50:31.546936 | orchestrator | 2026-02-05 03:50:31.546946 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-05 03:50:31.546956 | orchestrator | Thursday 05 February 2026 03:50:22 +0000 (0:00:04.713) 0:02:31.865 ***** 2026-02-05 03:50:31.546968 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-05 03:50:31.546980 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j191703425808.3908', 'results_file': '/ansible/.ansible_async/j191703425808.3908', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.546990 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j974442711754.3933', 'results_file': '/ansible/.ansible_async/j974442711754.3933', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.547004 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j433137256896.3958', 'results_file': '/ansible/.ansible_async/j433137256896.3958', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-05 03:50:31.547038 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j9587462690.3983', 'results_file': '/ansible/.ansible_async/j9587462690.3983', 'changed': 
True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.355911 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j615772578991.4008', 'results_file': '/ansible/.ansible_async/j615772578991.4008', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.355988 | orchestrator | 2026-02-05 03:51:12.355995 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-05 03:51:12.356001 | orchestrator | Thursday 05 February 2026 03:50:31 +0000 (0:00:09.429) 0:02:41.295 ***** 2026-02-05 03:51:12.356005 | orchestrator | changed: [localhost] => (item=test) 2026-02-05 03:51:12.356012 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-05 03:51:12.356016 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-05 03:51:12.356020 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-05 03:51:12.356024 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-05 03:51:12.356027 | orchestrator | 2026-02-05 03:51:12.356046 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-05 03:51:12.356051 | orchestrator | Thursday 05 February 2026 03:50:36 +0000 (0:00:05.013) 0:02:46.308 ***** 2026-02-05 03:51:12.356055 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-05 03:51:12.356060 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j819578608576.4077', 'results_file': '/ansible/.ansible_async/j819578608576.4077', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.356065 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j697723867330.4102', 'results_file': '/ansible/.ansible_async/j697723867330.4102', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.356068 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j357704018844.4128', 'results_file': '/ansible/.ansible_async/j357704018844.4128', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.356072 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j669828421170.4154', 'results_file': '/ansible/.ansible_async/j669828421170.4154', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.356076 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j586158682268.4180', 'results_file': '/ansible/.ansible_async/j586158682268.4180', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-05 03:51:12.356080 | orchestrator | 2026-02-05 03:51:12.356084 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-05 03:51:12.356087 | orchestrator | Thursday 05 February 2026 03:50:46 +0000 (0:00:10.067) 0:02:56.376 ***** 2026-02-05 03:51:12.356091 | orchestrator | changed: [localhost] 2026-02-05 03:51:12.356095 | orchestrator | 2026-02-05 03:51:12.356099 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-05 03:51:12.356103 | orchestrator | Thursday 05 February 
2026 03:50:53 +0000 (0:00:06.576) 0:03:02.952 ***** 2026-02-05 03:51:12.356106 | orchestrator | changed: [localhost] 2026-02-05 03:51:12.356110 | orchestrator | 2026-02-05 03:51:12.356114 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-05 03:51:12.356118 | orchestrator | Thursday 05 February 2026 03:51:06 +0000 (0:00:13.497) 0:03:16.450 ***** 2026-02-05 03:51:12.356121 | orchestrator | ok: [localhost] 2026-02-05 03:51:12.356125 | orchestrator | 2026-02-05 03:51:12.356129 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-05 03:51:12.356133 | orchestrator | Thursday 05 February 2026 03:51:11 +0000 (0:00:05.276) 0:03:21.726 ***** 2026-02-05 03:51:12.356136 | orchestrator | ok: [localhost] => { 2026-02-05 03:51:12.356140 | orchestrator |  "msg": "192.168.112.197" 2026-02-05 03:51:12.356144 | orchestrator | } 2026-02-05 03:51:12.356148 | orchestrator | 2026-02-05 03:51:12.356152 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:51:12.356157 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 03:51:12.356161 | orchestrator | 2026-02-05 03:51:12.356165 | orchestrator | 2026-02-05 03:51:12.356169 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:51:12.356172 | orchestrator | Thursday 05 February 2026 03:51:12 +0000 (0:00:00.046) 0:03:21.773 ***** 2026-02-05 03:51:12.356176 | orchestrator | =============================================================================== 2026-02-05 03:51:12.356180 | orchestrator | Wait for instance creation to complete --------------------------------- 57.39s 2026-02-05 03:51:12.356183 | orchestrator | Attach test volume ----------------------------------------------------- 13.50s 2026-02-05 03:51:12.356187 | orchestrator | Add member roles to user 
test ------------------------------------------ 11.46s 2026-02-05 03:51:12.356207 | orchestrator | Create test router ----------------------------------------------------- 10.91s 2026-02-05 03:51:12.356245 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.07s 2026-02-05 03:51:12.356249 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.43s 2026-02-05 03:51:12.356253 | orchestrator | Create test volume ------------------------------------------------------ 6.58s 2026-02-05 03:51:12.356267 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.52s 2026-02-05 03:51:12.356271 | orchestrator | Create test subnet ------------------------------------------------------ 5.31s 2026-02-05 03:51:12.356275 | orchestrator | Create floating ip address ---------------------------------------------- 5.28s 2026-02-05 03:51:12.356279 | orchestrator | Add tag to instances ---------------------------------------------------- 5.01s 2026-02-05 03:51:12.356282 | orchestrator | Create ssh security group ----------------------------------------------- 5.00s 2026-02-05 03:51:12.356286 | orchestrator | Create test instances --------------------------------------------------- 4.90s 2026-02-05 03:51:12.356290 | orchestrator | Add metadata to instances ----------------------------------------------- 4.71s 2026-02-05 03:51:12.356293 | orchestrator | Create test network ----------------------------------------------------- 4.70s 2026-02-05 03:51:12.356297 | orchestrator | Create test server group ------------------------------------------------ 4.45s 2026-02-05 03:51:12.356301 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.38s 2026-02-05 03:51:12.356304 | orchestrator | Create test user -------------------------------------------------------- 4.31s 2026-02-05 03:51:12.356308 | orchestrator | Create test-admin user 
-------------------------------------------------- 4.23s 2026-02-05 03:51:12.356312 | orchestrator | Create icmp security group ---------------------------------------------- 4.00s 2026-02-05 03:51:12.714086 | orchestrator | + server_list 2026-02-05 03:51:12.714172 | orchestrator | + openstack --os-cloud test server list 2026-02-05 03:51:16.410989 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-05 03:51:16.411107 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-05 03:51:16.411121 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-05 03:51:16.411130 | orchestrator | | 7e3c75c8-21b5-4ac0-86b2-691f80aaedf5 | test-4 | ACTIVE | test=192.168.112.148, 192.168.200.79 | N/A (booted from volume) | SCS-1L-1 | 2026-02-05 03:51:16.411138 | orchestrator | | 26ad732b-d299-4c5a-8b40-e7f02fab347b | test-2 | ACTIVE | test=192.168.112.125, 192.168.200.237 | N/A (booted from volume) | SCS-1L-1 | 2026-02-05 03:51:16.411146 | orchestrator | | ed8f58c9-57a2-44b4-8561-739529bf3fba | test-3 | ACTIVE | test=192.168.112.169, 192.168.200.5 | N/A (booted from volume) | SCS-1L-1 | 2026-02-05 03:51:16.411154 | orchestrator | | 55f48e08-9ffe-46f2-819a-a97d08531332 | test | ACTIVE | test=192.168.112.197, 192.168.200.251 | N/A (booted from volume) | SCS-1L-1 | 2026-02-05 03:51:16.411162 | orchestrator | | 5af72c23-5120-410f-bc28-632be77919a3 | test-1 | ACTIVE | test=192.168.112.143, 192.168.200.17 | N/A (booted from volume) | SCS-1L-1 | 2026-02-05 03:51:16.411170 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-05 03:51:16.680039 | orchestrator | + openstack --os-cloud test server show test 2026-02-05 03:51:20.150396 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:20.150552 | orchestrator | | Field | Value | 2026-02-05 03:51:20.150616 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:20.150648 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-05 03:51:20.150668 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-05 03:51:20.150686 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-05 03:51:20.150742 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-05 03:51:20.150765 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-05 03:51:20.150785 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-05 03:51:20.150832 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-05 03:51:20.150854 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-05 03:51:20.150890 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-05 03:51:20.150912 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-05 03:51:20.150942 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-05 03:51:20.150965 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-05 03:51:20.150986 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-02-05 03:51:20.151007 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-05 03:51:20.151028 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-05 03:51:20.151049 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-05T03:49:51.000000 | 2026-02-05 03:51:20.151083 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-05 03:51:20.151126 | orchestrator | | accessIPv4 | | 2026-02-05 03:51:20.151150 | orchestrator | | accessIPv6 | | 2026-02-05 03:51:20.151173 | orchestrator | | addresses | test=192.168.112.197, 192.168.200.251 | 2026-02-05 03:51:20.151203 | orchestrator | | config_drive | | 2026-02-05 03:51:20.151252 | orchestrator | | created | 2026-02-05T03:49:25Z | 2026-02-05 03:51:20.151273 | orchestrator | | description | None | 2026-02-05 03:51:20.151305 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-05 03:51:20.151323 | orchestrator | | hostId | 5ceb9a361d8bc27b2c610213b4beb40710504efb067ebae8552c751f | 2026-02-05 03:51:20.151342 | orchestrator | | host_status | None | 2026-02-05 03:51:20.151386 | orchestrator | | id | 55f48e08-9ffe-46f2-819a-a97d08531332 | 2026-02-05 03:51:20.151405 | orchestrator | | image | N/A (booted from volume) | 2026-02-05 03:51:20.151438 | orchestrator | | key_name | test | 2026-02-05 03:51:20.151459 | orchestrator | | locked | False | 2026-02-05 03:51:20.151479 | orchestrator | | locked_reason | None | 2026-02-05 03:51:20.151497 | orchestrator | | name | test | 2026-02-05 03:51:20.151516 | orchestrator | | pinned_availability_zone | None | 2026-02-05 03:51:20.151548 | orchestrator | | progress | 0 | 2026-02-05 03:51:20.151568 | orchestrator | | 
project_id | 50cadd20dfba472b842bdcf0431bef14 | 2026-02-05 03:51:20.151587 | orchestrator | | properties | hostname='test' | 2026-02-05 03:51:20.151638 | orchestrator | | security_groups | name='icmp' | 2026-02-05 03:51:20.151652 | orchestrator | | | name='ssh' | 2026-02-05 03:51:20.151664 | orchestrator | | server_groups | None | 2026-02-05 03:51:20.151675 | orchestrator | | status | ACTIVE | 2026-02-05 03:51:20.151691 | orchestrator | | tags | test | 2026-02-05 03:51:20.151703 | orchestrator | | trusted_image_certificates | None | 2026-02-05 03:51:20.151714 | orchestrator | | updated | 2026-02-05T03:50:23Z | 2026-02-05 03:51:20.151725 | orchestrator | | user_id | e98a26ca7ccf4ec5bdab2d1b2dae5725 | 2026-02-05 03:51:20.151737 | orchestrator | | volumes_attached | delete_on_termination='True', id='bc7205f9-ab27-498f-bff0-0d2a1944b4d0' | 2026-02-05 03:51:20.151755 | orchestrator | | | delete_on_termination='False', id='4ceeff5c-40c7-47c4-9390-b0511e97e883' | 2026-02-05 03:51:20.153884 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:20.467399 | orchestrator | + openstack --os-cloud test server show test-1 2026-02-05 03:51:23.507776 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 
03:51:23.507878 | orchestrator | | Field | Value | 2026-02-05 03:51:23.507913 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:23.507926 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-05 03:51:23.507938 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-05 03:51:23.507950 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-05 03:51:23.507961 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-05 03:51:23.507998 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-05 03:51:23.508010 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-05 03:51:23.508043 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-05 03:51:23.508055 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-05 03:51:23.508067 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-05 03:51:23.508083 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-05 03:51:23.508095 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-05 03:51:23.508106 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-05 03:51:23.508117 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-05 03:51:23.508136 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-05 03:51:23.508148 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-05 03:51:23.508159 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-05T03:49:50.000000 | 2026-02-05 03:51:23.508178 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-05 03:51:23.508191 | orchestrator | | accessIPv4 | | 2026-02-05 
03:51:23.508202 | orchestrator | | accessIPv6 | | 2026-02-05 03:51:23.508219 | orchestrator | | addresses | test=192.168.112.143, 192.168.200.17 | 2026-02-05 03:51:23.508278 | orchestrator | | config_drive | | 2026-02-05 03:51:23.508290 | orchestrator | | created | 2026-02-05T03:49:25Z | 2026-02-05 03:51:23.508309 | orchestrator | | description | None | 2026-02-05 03:51:23.508322 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-05 03:51:23.508336 | orchestrator | | hostId | 5ceb9a361d8bc27b2c610213b4beb40710504efb067ebae8552c751f | 2026-02-05 03:51:23.508349 | orchestrator | | host_status | None | 2026-02-05 03:51:23.508369 | orchestrator | | id | 5af72c23-5120-410f-bc28-632be77919a3 | 2026-02-05 03:51:23.508382 | orchestrator | | image | N/A (booted from volume) | 2026-02-05 03:51:23.508396 | orchestrator | | key_name | test | 2026-02-05 03:51:23.508414 | orchestrator | | locked | False | 2026-02-05 03:51:23.508428 | orchestrator | | locked_reason | None | 2026-02-05 03:51:23.508440 | orchestrator | | name | test-1 | 2026-02-05 03:51:23.508460 | orchestrator | | pinned_availability_zone | None | 2026-02-05 03:51:23.508473 | orchestrator | | progress | 0 | 2026-02-05 03:51:23.508486 | orchestrator | | project_id | 50cadd20dfba472b842bdcf0431bef14 | 2026-02-05 03:51:23.508499 | orchestrator | | properties | hostname='test-1' | 2026-02-05 03:51:23.508519 | orchestrator | | security_groups | name='icmp' | 2026-02-05 03:51:23.508534 | orchestrator | | | name='ssh' | 2026-02-05 03:51:23.508547 | orchestrator | | server_groups | None | 2026-02-05 03:51:23.508558 | orchestrator | | status | ACTIVE | 2026-02-05 
03:51:23.508569 | orchestrator | | tags | test | 2026-02-05 03:51:23.508587 | orchestrator | | trusted_image_certificates | None | 2026-02-05 03:51:23.508598 | orchestrator | | updated | 2026-02-05T03:50:24Z | 2026-02-05 03:51:23.508609 | orchestrator | | user_id | e98a26ca7ccf4ec5bdab2d1b2dae5725 | 2026-02-05 03:51:23.508620 | orchestrator | | volumes_attached | delete_on_termination='True', id='02b58908-7713-486b-a5d9-ffc508595113' | 2026-02-05 03:51:23.512264 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:23.779073 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-05 03:51:26.756648 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:26.756751 | orchestrator | | Field | Value | 2026-02-05 03:51:26.756783 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:26.756797 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-05 03:51:26.756826 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-05 03:51:26.756835 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-05 03:51:26.756844 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-05 03:51:26.756852 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-05 03:51:26.756861 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-05 03:51:26.756884 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-05 03:51:26.756893 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-05 03:51:26.756901 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-05 03:51:26.756910 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-05 03:51:26.756928 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-05 03:51:26.756937 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-05 03:51:26.756946 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-05 03:51:26.756954 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-05 03:51:26.756963 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-05 03:51:26.756971 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-05T03:49:52.000000 | 2026-02-05 03:51:26.756985 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-05 03:51:26.756994 | orchestrator | | accessIPv4 | | 2026-02-05 03:51:26.757002 | orchestrator | | accessIPv6 | | 2026-02-05 03:51:26.757015 | orchestrator | | addresses | test=192.168.112.125, 192.168.200.237 | 2026-02-05 03:51:26.757029 | orchestrator | | config_drive | | 2026-02-05 03:51:26.757037 | orchestrator | | created | 2026-02-05T03:49:26Z | 2026-02-05 03:51:26.757046 | orchestrator | | description | None | 2026-02-05 03:51:26.757054 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-05 03:51:26.757069 | orchestrator | | hostId | eeccea90fe5654abd45882712453d56606a8acaa35ab97cc02b10e66 | 2026-02-05 03:51:26.757084 | orchestrator | | host_status | None | 2026-02-05 03:51:26.757106 | orchestrator | | id | 26ad732b-d299-4c5a-8b40-e7f02fab347b | 2026-02-05 03:51:26.757121 | orchestrator | | image | N/A (booted from volume) | 2026-02-05 03:51:26.757135 | orchestrator | | key_name | test | 2026-02-05 03:51:26.757163 | orchestrator | | locked | False | 2026-02-05 03:51:26.757176 | orchestrator | | locked_reason | None | 2026-02-05 03:51:26.757191 | orchestrator | | name | test-2 | 2026-02-05 03:51:26.757205 | orchestrator | | pinned_availability_zone | None | 2026-02-05 03:51:26.757220 | orchestrator | | progress | 0 | 2026-02-05 03:51:26.757263 | orchestrator | | project_id | 50cadd20dfba472b842bdcf0431bef14 | 2026-02-05 03:51:26.757278 | orchestrator | | properties | hostname='test-2' | 2026-02-05 03:51:26.757301 | orchestrator | | security_groups | name='icmp' | 2026-02-05 03:51:26.757315 | orchestrator | | | name='ssh' | 2026-02-05 03:51:26.757338 | orchestrator | | server_groups | None | 2026-02-05 03:51:26.757351 | orchestrator | | status | ACTIVE | 2026-02-05 03:51:26.757360 | orchestrator | | tags | test | 2026-02-05 03:51:26.757368 | orchestrator | | trusted_image_certificates | None | 2026-02-05 03:51:26.757381 | orchestrator | | updated | 2026-02-05T03:50:24Z | 2026-02-05 03:51:26.757394 | orchestrator | | user_id | e98a26ca7ccf4ec5bdab2d1b2dae5725 | 2026-02-05 03:51:26.757407 | orchestrator | | volumes_attached | delete_on_termination='True', id='ec420eff-198d-4a7c-9e26-2cf2e36f7e36' | 2026-02-05 03:51:26.760620 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:27.042198 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-05 03:51:30.030827 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:30.030939 | orchestrator | | Field | Value | 2026-02-05 03:51:30.030951 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:30.030966 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-05 03:51:30.030974 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-05 03:51:30.030981 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-05 03:51:30.030988 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-05 03:51:30.030995 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-05 03:51:30.031002 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-05 
03:51:30.031023 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-05 03:51:30.031036 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-05 03:51:30.031043 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-05 03:51:30.031050 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-05 03:51:30.031059 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-05 03:51:30.031067 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-05 03:51:30.031074 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-05 03:51:30.031079 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-05 03:51:30.031083 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-05 03:51:30.031087 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-05T03:49:53.000000 | 2026-02-05 03:51:30.031095 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-05 03:51:30.031103 | orchestrator | | accessIPv4 | | 2026-02-05 03:51:30.031107 | orchestrator | | accessIPv6 | | 2026-02-05 03:51:30.031111 | orchestrator | | addresses | test=192.168.112.169, 192.168.200.5 | 2026-02-05 03:51:30.031355 | orchestrator | | config_drive | | 2026-02-05 03:51:30.031364 | orchestrator | | created | 2026-02-05T03:49:26Z | 2026-02-05 03:51:30.031369 | orchestrator | | description | None | 2026-02-05 03:51:30.031374 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-05 03:51:30.031379 | orchestrator | | hostId | eeccea90fe5654abd45882712453d56606a8acaa35ab97cc02b10e66 | 2026-02-05 03:51:30.031384 | orchestrator | | host_status | None | 2026-02-05 03:51:30.031398 | orchestrator | | id | 
ed8f58c9-57a2-44b4-8561-739529bf3fba | 2026-02-05 03:51:30.031406 | orchestrator | | image | N/A (booted from volume) | 2026-02-05 03:51:30.031412 | orchestrator | | key_name | test | 2026-02-05 03:51:30.031417 | orchestrator | | locked | False | 2026-02-05 03:51:30.031422 | orchestrator | | locked_reason | None | 2026-02-05 03:51:30.031427 | orchestrator | | name | test-3 | 2026-02-05 03:51:30.031432 | orchestrator | | pinned_availability_zone | None | 2026-02-05 03:51:30.031437 | orchestrator | | progress | 0 | 2026-02-05 03:51:30.031442 | orchestrator | | project_id | 50cadd20dfba472b842bdcf0431bef14 | 2026-02-05 03:51:30.031451 | orchestrator | | properties | hostname='test-3' | 2026-02-05 03:51:30.031461 | orchestrator | | security_groups | name='icmp' | 2026-02-05 03:51:30.031468 | orchestrator | | | name='ssh' | 2026-02-05 03:51:30.031474 | orchestrator | | server_groups | None | 2026-02-05 03:51:30.031479 | orchestrator | | status | ACTIVE | 2026-02-05 03:51:30.031484 | orchestrator | | tags | test | 2026-02-05 03:51:30.031489 | orchestrator | | trusted_image_certificates | None | 2026-02-05 03:51:30.031494 | orchestrator | | updated | 2026-02-05T03:50:25Z | 2026-02-05 03:51:30.031499 | orchestrator | | user_id | e98a26ca7ccf4ec5bdab2d1b2dae5725 | 2026-02-05 03:51:30.031508 | orchestrator | | volumes_attached | delete_on_termination='True', id='dde2ed99-2d19-41e2-9737-6e8d41b2aa3f' | 2026-02-05 03:51:30.034833 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:30.338467 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-05 03:51:33.376305 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:33.376453 | orchestrator | | Field | Value | 2026-02-05 03:51:33.376483 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:33.376496 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-05 03:51:33.376508 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-05 03:51:33.376519 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-05 03:51:33.376531 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-05 03:51:33.376565 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-05 03:51:33.376577 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-05 03:51:33.376608 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-05 03:51:33.376621 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-05 03:51:33.376638 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-05 03:51:33.376650 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-05 03:51:33.376661 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-05 03:51:33.376672 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-05 03:51:33.376683 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-05 03:51:33.376695 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-05 03:51:33.376714 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-05 03:51:33.376725 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-05T03:49:51.000000 | 2026-02-05 03:51:33.376745 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-05 03:51:33.376763 | orchestrator | | accessIPv4 | | 2026-02-05 03:51:33.376775 | orchestrator | | accessIPv6 | | 2026-02-05 03:51:33.376786 | orchestrator | | addresses | test=192.168.112.148, 192.168.200.79 | 2026-02-05 03:51:33.376797 | orchestrator | | config_drive | | 2026-02-05 03:51:33.376808 | orchestrator | | created | 2026-02-05T03:49:27Z | 2026-02-05 03:51:33.376820 | orchestrator | | description | None | 2026-02-05 03:51:33.376838 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-05 03:51:33.376849 | orchestrator | | hostId | 5ceb9a361d8bc27b2c610213b4beb40710504efb067ebae8552c751f | 2026-02-05 03:51:33.376861 | orchestrator | | host_status | None | 2026-02-05 03:51:33.376879 | orchestrator | | id | 7e3c75c8-21b5-4ac0-86b2-691f80aaedf5 | 2026-02-05 03:51:33.376898 | orchestrator | | image | N/A (booted from volume) | 2026-02-05 03:51:33.376917 | orchestrator | | key_name | test | 2026-02-05 03:51:33.376943 | orchestrator | | locked | False | 2026-02-05 03:51:33.376965 | orchestrator | | locked_reason | None | 2026-02-05 03:51:33.376984 | orchestrator | | name | test-4 | 2026-02-05 03:51:33.377013 | orchestrator | | pinned_availability_zone | None | 2026-02-05 03:51:33.377031 | orchestrator | | progress | 0 | 2026-02-05 
03:51:33.377050 | orchestrator | | project_id | 50cadd20dfba472b842bdcf0431bef14 | 2026-02-05 03:51:33.377069 | orchestrator | | properties | hostname='test-4' | 2026-02-05 03:51:33.377099 | orchestrator | | security_groups | name='icmp' | 2026-02-05 03:51:33.377128 | orchestrator | | | name='ssh' | 2026-02-05 03:51:33.377148 | orchestrator | | server_groups | None | 2026-02-05 03:51:33.377163 | orchestrator | | status | ACTIVE | 2026-02-05 03:51:33.377174 | orchestrator | | tags | test | 2026-02-05 03:51:33.377194 | orchestrator | | trusted_image_certificates | None | 2026-02-05 03:51:33.377205 | orchestrator | | updated | 2026-02-05T03:50:26Z | 2026-02-05 03:51:33.377216 | orchestrator | | user_id | e98a26ca7ccf4ec5bdab2d1b2dae5725 | 2026-02-05 03:51:33.377227 | orchestrator | | volumes_attached | delete_on_termination='True', id='97dd7f35-5ae0-4496-b5fc-419abaa80836' | 2026-02-05 03:51:33.380445 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-05 03:51:33.665401 | orchestrator | + server_ping 2026-02-05 03:51:33.666131 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-05 03:51:33.667124 | orchestrator | ++ tr -d '\r' 2026-02-05 03:51:36.553434 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-05 03:51:36.553518 | orchestrator | + ping -c3 192.168.112.148 2026-02-05 03:51:36.565113 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 
2026-02-05 03:51:36.565184 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=7.20 ms 2026-02-05 03:51:37.561655 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=1.94 ms 2026-02-05 03:51:38.563656 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=2.16 ms 2026-02-05 03:51:38.563792 | orchestrator | 2026-02-05 03:51:38.563820 | orchestrator | --- 192.168.112.148 ping statistics --- 2026-02-05 03:51:38.563844 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-05 03:51:38.563864 | orchestrator | rtt min/avg/max/mdev = 1.935/3.764/7.196/2.428 ms 2026-02-05 03:51:38.564222 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-05 03:51:38.564269 | orchestrator | + ping -c3 192.168.112.143 2026-02-05 03:51:38.577749 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2026-02-05 03:51:38.577868 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=8.96 ms 2026-02-05 03:51:39.572903 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.68 ms 2026-02-05 03:51:40.574352 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.48 ms 2026-02-05 03:51:40.574478 | orchestrator | 2026-02-05 03:51:40.574490 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-02-05 03:51:40.574501 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-05 03:51:40.574540 | orchestrator | rtt min/avg/max/mdev = 1.477/4.373/8.963/3.282 ms 2026-02-05 03:51:40.574715 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-05 03:51:40.574732 | orchestrator | + ping -c3 192.168.112.197 2026-02-05 03:51:40.585016 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 
2026-02-05 03:51:40.585105 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=5.67 ms 2026-02-05 03:51:41.584421 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.68 ms 2026-02-05 03:51:42.585287 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.81 ms 2026-02-05 03:51:42.585413 | orchestrator | 2026-02-05 03:51:42.585429 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-02-05 03:51:42.585442 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-05 03:51:42.585679 | orchestrator | rtt min/avg/max/mdev = 1.805/3.383/5.671/1.655 ms 2026-02-05 03:51:42.585722 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-05 03:51:42.585738 | orchestrator | + ping -c3 192.168.112.169 2026-02-05 03:51:42.596304 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data. 2026-02-05 03:51:42.596371 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.69 ms 2026-02-05 03:51:43.593760 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.01 ms 2026-02-05 03:51:44.595237 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.74 ms 2026-02-05 03:51:44.595345 | orchestrator | 2026-02-05 03:51:44.595357 | orchestrator | --- 192.168.112.169 ping statistics --- 2026-02-05 03:51:44.595363 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-05 03:51:44.595367 | orchestrator | rtt min/avg/max/mdev = 1.744/3.480/6.688/2.270 ms 2026-02-05 03:51:44.595372 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-05 03:51:44.595376 | orchestrator | + ping -c3 192.168.112.125 2026-02-05 03:51:44.606461 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 
2026-02-05 03:51:44.606559 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=5.88 ms 2026-02-05 03:51:45.604132 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.36 ms 2026-02-05 03:51:46.605155 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.61 ms 2026-02-05 03:51:46.605355 | orchestrator | 2026-02-05 03:51:46.605379 | orchestrator | --- 192.168.112.125 ping statistics --- 2026-02-05 03:51:46.605393 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-05 03:51:46.605405 | orchestrator | rtt min/avg/max/mdev = 1.610/3.285/5.883/1.862 ms 2026-02-05 03:51:46.605805 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-05 03:51:47.101995 | orchestrator | ok: Runtime: 0:08:54.606596 2026-02-05 03:51:47.153244 | 2026-02-05 03:51:47.153369 | TASK [Run tempest] 2026-02-05 03:51:47.685770 | orchestrator | skipping: Conditional result was False 2026-02-05 03:51:47.705129 | 2026-02-05 03:51:47.705289 | TASK [Check prometheus alert status] 2026-02-05 03:51:48.240786 | orchestrator | skipping: Conditional result was False 2026-02-05 03:51:48.255747 | 2026-02-05 03:51:48.255888 | PLAY [Upgrade testbed] 2026-02-05 03:51:48.268177 | 2026-02-05 03:51:48.268298 | TASK [Print next ceph version] 2026-02-05 03:51:48.345550 | orchestrator | ok 2026-02-05 03:51:48.355511 | 2026-02-05 03:51:48.355644 | TASK [Print next openstack version] 2026-02-05 03:51:48.423171 | orchestrator | ok 2026-02-05 03:51:48.434915 | 2026-02-05 03:51:48.435033 | TASK [Print next manager version] 2026-02-05 03:51:48.504928 | orchestrator | ok 2026-02-05 03:51:48.514626 | 2026-02-05 03:51:48.514743 | TASK [Set cloud fact (Zuul deployment)] 2026-02-05 03:51:48.558949 | orchestrator | ok 2026-02-05 03:51:48.569326 | 2026-02-05 03:51:48.569432 | TASK [Set cloud fact (local deployment)] 2026-02-05 03:51:48.594211 | orchestrator | skipping: Conditional result was False 2026-02-05 03:51:48.610154 | 2026-02-05 
03:51:48.610285 | TASK [Fetch manager address] 2026-02-05 03:51:48.906994 | orchestrator | ok 2026-02-05 03:51:48.921521 | 2026-02-05 03:51:48.921763 | TASK [Set manager_host address] 2026-02-05 03:51:49.000930 | orchestrator | ok 2026-02-05 03:51:49.012441 | 2026-02-05 03:51:49.012564 | TASK [Run upgrade] 2026-02-05 03:51:49.699172 | orchestrator | + set -e 2026-02-05 03:51:49.699400 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-05 03:51:49.699429 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-05 03:51:49.699452 | orchestrator | + CEPH_VERSION=reef 2026-02-05 03:51:49.699467 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-05 03:51:49.699482 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-05 03:51:49.699506 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-05 03:51:49.707813 | orchestrator | + set -e 2026-02-05 03:51:49.707891 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 03:51:49.708553 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 03:51:49.708578 | orchestrator | ++ INTERACTIVE=false 2026-02-05 03:51:49.708583 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 03:51:49.708593 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 03:51:49.710109 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-05 03:51:49.756116 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-05 03:51:49.757170 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-05 03:51:49.800531 | orchestrator | 2026-02-05 03:51:49.800614 | orchestrator | # UPGRADE MANAGER 2026-02-05 03:51:49.800627 | orchestrator | 2026-02-05 03:51:49.800632 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-05 03:51:49.800638 | orchestrator | + echo 2026-02-05 03:51:49.800643 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-02-05 03:51:49.800650 | orchestrator | + echo 2026-02-05 03:51:49.800655 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-05 03:51:49.800661 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-05 03:51:49.800666 | orchestrator | + CEPH_VERSION=reef 2026-02-05 03:51:49.800671 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-05 03:51:49.800676 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-05 03:51:49.800681 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-05 03:51:49.809843 | orchestrator | + set -e 2026-02-05 03:51:49.809944 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-05 03:51:49.809958 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-05 03:51:49.818897 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-05 03:51:49.818970 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-05 03:51:49.821954 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-05 03:51:49.827298 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-05 03:51:49.836111 | orchestrator | /opt/configuration ~ 2026-02-05 03:51:49.836174 | orchestrator | + set -e 2026-02-05 03:51:49.836182 | orchestrator | + pushd /opt/configuration 2026-02-05 03:51:49.836188 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 03:51:49.836195 | orchestrator | + source /opt/venv/bin/activate 2026-02-05 03:51:49.837667 | orchestrator | ++ deactivate nondestructive 2026-02-05 03:51:49.837701 | orchestrator | ++ '[' -n '' ']' 2026-02-05 03:51:49.837707 | orchestrator | ++ '[' -n '' ']' 2026-02-05 03:51:49.837712 | orchestrator | ++ hash -r 2026-02-05 03:51:49.837717 | orchestrator | ++ '[' -n '' ']' 2026-02-05 03:51:49.837722 | orchestrator | ++ unset VIRTUAL_ENV 
2026-02-05 03:51:49.837727 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-05 03:51:49.837732 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-05 03:51:49.837739 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-05 03:51:49.837788 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-05 03:51:49.837795 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-05 03:51:49.837800 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-05 03:51:49.837806 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 03:51:49.838040 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 03:51:49.838051 | orchestrator | ++ export PATH
2026-02-05 03:51:49.838056 | orchestrator | ++ '[' -n '' ']'
2026-02-05 03:51:49.838061 | orchestrator | ++ '[' -z '' ']'
2026-02-05 03:51:49.838069 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-05 03:51:49.838075 | orchestrator | ++ PS1='(venv) '
2026-02-05 03:51:49.838083 | orchestrator | ++ export PS1
2026-02-05 03:51:49.838091 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-05 03:51:49.838099 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-05 03:51:49.838170 | orchestrator | ++ hash -r
2026-02-05 03:51:49.838182 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-05 03:51:50.912062 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-05 03:51:50.913122 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-05 03:51:50.914561 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-05 03:51:50.915861 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-05 03:51:50.917069 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-05 03:51:50.928936 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-05 03:51:50.930299 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-05 03:51:50.931324 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-05 03:51:50.932707 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-05 03:51:50.968690 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-05 03:51:50.969891 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-05 03:51:50.971907 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-05 03:51:50.973170 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-05 03:51:50.977211 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-05 03:51:51.226582 | orchestrator | ++ which gilt
2026-02-05 03:51:51.228576 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-05 03:51:51.228652 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-05 03:51:51.442563 | orchestrator | osism.cfg-generics:
2026-02-05 03:51:51.551882 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-05 03:51:51.553219 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-05 03:51:51.554295 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-05 03:51:51.554318 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-05 03:51:52.448151 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-05 03:51:52.462293 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-05 03:51:52.941054 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-05 03:51:52.997528 | orchestrator | ~
2026-02-05 03:51:52.997679 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-05 03:51:52.997708 | orchestrator | + deactivate
2026-02-05 03:51:52.997725 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-05 03:51:52.997743 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 03:51:52.997760 | orchestrator | + export PATH
2026-02-05 03:51:52.997777 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-05 03:51:52.997795 | orchestrator | + '[' -n '' ']'
2026-02-05 03:51:52.997811 | orchestrator | + hash -r
2026-02-05 03:51:52.997827 | orchestrator | + '[' -n '' ']'
2026-02-05 03:51:52.997843 | orchestrator | + unset VIRTUAL_ENV
2026-02-05 03:51:52.997861 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-05 03:51:52.997877 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-05 03:51:52.997894 | orchestrator | + unset -f deactivate
2026-02-05 03:51:52.997910 | orchestrator | + popd
2026-02-05 03:51:52.998844 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]]
2026-02-05 03:51:52.998901 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-05 03:51:53.003928 | orchestrator | + set -e
2026-02-05 03:51:53.003989 | orchestrator | + NAMESPACE=kolla/release
2026-02-05 03:51:53.004000 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-05 03:51:53.016243 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-05 03:51:53.021010 | orchestrator | /opt/configuration ~
2026-02-05 03:51:53.021068 | orchestrator | + set -e
2026-02-05 03:51:53.021081 | orchestrator | + pushd /opt/configuration
2026-02-05 03:51:53.021092 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-05 03:51:53.021104 | orchestrator | + source /opt/venv/bin/activate
2026-02-05 03:51:53.021115 | orchestrator | ++ deactivate nondestructive
2026-02-05 03:51:53.021126 | orchestrator | ++ '[' -n '' ']'
2026-02-05 03:51:53.021138 | orchestrator | ++ '[' -n '' ']'
2026-02-05 03:51:53.021148 | orchestrator | ++ hash -r
2026-02-05 03:51:53.021168 | orchestrator | ++ '[' -n '' ']'
2026-02-05 03:51:53.021179 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-05 03:51:53.021190 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-05 03:51:53.021201 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-05 03:51:53.021212 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-05 03:51:53.021223 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-05 03:51:53.021233 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-05 03:51:53.021249 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-05 03:51:53.021279 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 03:51:53.021293 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 03:51:53.022096 | orchestrator | ++ export PATH
2026-02-05 03:51:53.022115 | orchestrator | ++ '[' -n '' ']'
2026-02-05 03:51:53.022128 | orchestrator | ++ '[' -z '' ']'
2026-02-05 03:51:53.022140 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-05 03:51:53.022152 | orchestrator | ++ PS1='(venv) '
2026-02-05 03:51:53.022163 | orchestrator | ++ export PS1
2026-02-05 03:51:53.022176 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-05 03:51:53.022188 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-05 03:51:53.022200 | orchestrator | ++ hash -r
2026-02-05 03:51:53.022212 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-05 03:51:53.626092 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-05 03:51:53.627108 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-05 03:51:53.628399 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-05 03:51:53.629618 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-05 03:51:53.630844 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-05 03:51:53.640923 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-05 03:51:53.642346 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-05 03:51:53.643569 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-05 03:51:53.644740 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-05 03:51:53.678186 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-05 03:51:53.679506 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-05 03:51:53.681307 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-05 03:51:53.682500 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-05 03:51:53.686492 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-05 03:51:53.909159 | orchestrator | ++ which gilt
2026-02-05 03:51:53.912916 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-05 03:51:53.912973 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-05 03:51:54.100900 | orchestrator | osism.cfg-generics:
2026-02-05 03:51:54.171792 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-05 03:51:54.172408 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-05 03:51:54.173053 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-05 03:51:54.173113 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-05 03:51:54.786501 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-05 03:51:54.798641 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-05 03:51:55.148553 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-05 03:51:55.205796 | orchestrator | ~
2026-02-05 03:51:55.205905 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-05 03:51:55.205927 | orchestrator | + deactivate
2026-02-05 03:51:55.205967 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-05 03:51:55.205985 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-05 03:51:55.205999 | orchestrator | + export PATH
2026-02-05 03:51:55.206014 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-05 03:51:55.206068 | orchestrator | + '[' -n '' ']'
2026-02-05 03:51:55.206083 | orchestrator | + hash -r
2026-02-05 03:51:55.206097 | orchestrator | + '[' -n '' ']'
2026-02-05 03:51:55.206112 | orchestrator | + unset VIRTUAL_ENV
2026-02-05 03:51:55.206126 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-05 03:51:55.206139 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-05 03:51:55.206153 | orchestrator | + unset -f deactivate
2026-02-05 03:51:55.206168 | orchestrator | + popd
2026-02-05 03:51:55.209126 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-02-05 03:51:55.254051 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-05 03:51:55.254238 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-05 03:51:55.330811 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 03:51:55.330891 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-02-05 03:51:55.338206 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-02-05 03:51:55.342426 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-02-05 03:51:55.391161 | orchestrator | ++ '[' -1 -le 0 ']'
2026-02-05 03:51:55.392289 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0
2026-02-05 03:51:55.489931 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-02-05 03:51:55.489999 | orchestrator | ++ echo true
2026-02-05 03:51:55.490888 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-02-05 03:51:55.492356 | orchestrator | +++ semver 2024.2 2024.2
2026-02-05 03:51:55.560455 | orchestrator | ++ '[' 0 -le 0 ']'
2026-02-05 03:51:55.560760 | orchestrator | +++ semver 2024.2 2025.1
2026-02-05 03:51:55.609924 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-02-05 03:51:55.610004 | orchestrator | ++ echo false
2026-02-05 03:51:55.610697 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-02-05 03:51:55.610713 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-05 03:51:55.610720 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-02-05 03:51:55.610726 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-02-05 03:51:55.610735 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-02-05 03:51:55.615839 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-02-05 03:51:55.615893 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-02-05 03:51:55.637886 | orchestrator | export RABBITMQ3TO4=true
2026-02-05 03:51:55.641380 | orchestrator | + osism update manager
2026-02-05 03:52:01.445020 | orchestrator | Collecting uv
2026-02-05 03:52:01.552594 | orchestrator | Downloading uv-0.9.30-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-02-05 03:52:01.570598 | orchestrator | Downloading uv-0.9.30-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.8 MB)
2026-02-05 03:52:02.607826 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.8/22.8 MB 24.5 MB/s eta 0:00:00
2026-02-05 03:52:02.662688 | orchestrator | Installing collected packages: uv
2026-02-05 03:52:03.110360 | orchestrator | Successfully installed uv-0.9.30
2026-02-05 03:52:03.885087 | orchestrator | Resolved 11 packages in 428ms
2026-02-05 03:52:03.928437 | orchestrator | Downloading ansible (54.5MiB)
2026-02-05 03:52:03.928529 | orchestrator | Downloading netaddr (2.2MiB)
2026-02-05 03:52:03.929329 | orchestrator | Downloading cryptography (4.2MiB)
2026-02-05 03:52:03.932332 | orchestrator | Downloading ansible-core (2.1MiB)
2026-02-05 03:52:04.368200 | orchestrator | Downloaded netaddr
2026-02-05 03:52:04.505447 | orchestrator | Downloaded ansible-core
2026-02-05 03:52:04.514723 | orchestrator | Downloaded cryptography
2026-02-05 03:52:10.617528 | orchestrator | Downloaded ansible
2026-02-05 03:52:10.617645 | orchestrator | Prepared 11 packages in 6.73s
2026-02-05 03:52:11.070419 | orchestrator | Installed 11 packages in 450ms
2026-02-05 03:52:11.070495 | orchestrator | + ansible==11.11.0
2026-02-05 03:52:11.070508 | orchestrator | + ansible-core==2.18.13
2026-02-05 03:52:11.070518 | orchestrator | + cffi==2.0.0
2026-02-05 03:52:11.070528 | orchestrator | + cryptography==46.0.4
2026-02-05 03:52:11.070537 | orchestrator | + jinja2==3.1.6
2026-02-05 03:52:11.070546 | orchestrator | + markupsafe==3.0.3
2026-02-05 03:52:11.070554 | orchestrator | + netaddr==1.3.0
2026-02-05 03:52:11.070563 | orchestrator | + packaging==26.0
2026-02-05 03:52:11.070571 | orchestrator | + pycparser==3.0
2026-02-05 03:52:11.070579 | orchestrator | + pyyaml==6.0.3
2026-02-05 03:52:11.070588 | orchestrator | + resolvelib==1.0.1
2026-02-05 03:52:12.187402 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-197955__4be9nc/tmpqz4d53ux/ansible-collection-servicesuqs5iju4'...
2026-02-05 03:52:13.491991 | orchestrator | Your branch is up to date with 'origin/main'.
2026-02-05 03:52:13.492065 | orchestrator | Already on 'main'
2026-02-05 03:52:13.984554 | orchestrator | Starting galaxy collection install process
2026-02-05 03:52:13.984666 | orchestrator | Process install dependency map
2026-02-05 03:52:13.984691 | orchestrator | Starting collection install process
2026-02-05 03:52:13.984711 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-02-05 03:52:13.984723 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-02-05 03:52:13.984734 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-05 03:52:14.483852 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-19798170b8l4zd/tmpadn3jw8k/ansible-playbooks-manageroxuq2t5s'...
2026-02-05 03:52:15.062346 | orchestrator | Your branch is up to date with 'origin/main'.
2026-02-05 03:52:15.062451 | orchestrator | Already on 'main'
2026-02-05 03:52:15.332620 | orchestrator | Starting galaxy collection install process
2026-02-05 03:52:15.332726 | orchestrator | Process install dependency map
2026-02-05 03:52:15.332744 | orchestrator | Starting collection install process
2026-02-05 03:52:15.332755 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-02-05 03:52:15.332768 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-02-05 03:52:15.332779 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-02-05 03:52:16.025917 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-02-05 03:52:16.026087 | orchestrator | -vvvv to see details
2026-02-05 03:52:16.456139 | orchestrator |
2026-02-05 03:52:16.456259 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-02-05 03:52:16.456305 | orchestrator |
2026-02-05 03:52:16.456319 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-05 03:52:20.521149 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:20.521229 | orchestrator |
2026-02-05 03:52:20.521239 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-05 03:52:20.592538 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 03:52:20.592612 | orchestrator |
2026-02-05 03:52:20.592635 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-05 03:52:22.358569 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:22.358676 | orchestrator |
2026-02-05 03:52:22.358695 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-05 03:52:22.422357 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:22.422446 | orchestrator |
2026-02-05 03:52:22.422458 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-05 03:52:22.497173 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-05 03:52:22.497272 | orchestrator |
2026-02-05 03:52:22.497398 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-05 03:52:26.771158 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-02-05 03:52:26.771245 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-02-05 03:52:26.771255 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-05 03:52:26.771273 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-02-05 03:52:26.771279 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-05 03:52:26.771346 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-05 03:52:26.771354 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-05 03:52:26.771360 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-02-05 03:52:26.771367 | orchestrator |
2026-02-05 03:52:26.771374 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-05 03:52:27.933032 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:27.933112 | orchestrator |
2026-02-05 03:52:27.933121 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-05 03:52:28.899483 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:28.899590 | orchestrator |
2026-02-05 03:52:28.899606 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-05 03:52:28.999476 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-05 03:52:28.999561 | orchestrator |
2026-02-05 03:52:28.999572 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-05 03:52:30.881219 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-02-05 03:52:30.881393 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-02-05 03:52:30.881419 | orchestrator |
2026-02-05 03:52:30.881441 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-05 03:52:31.820708 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:31.820807 | orchestrator |
2026-02-05 03:52:31.820823 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-05 03:52:31.887932 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:52:31.888013 | orchestrator |
2026-02-05 03:52:31.888025 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-05 03:52:31.978554 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-05 03:52:31.978632 | orchestrator |
2026-02-05 03:52:31.978642 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-05 03:52:32.946923 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:32.947008 | orchestrator |
2026-02-05 03:52:32.947018 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-05 03:52:33.019660 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-05 03:52:33.019791 | orchestrator |
2026-02-05 03:52:33.019821 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-05 03:52:34.873965 | orchestrator | ok: [testbed-manager] => (item=None)
2026-02-05 03:52:34.874126 | orchestrator | ok: [testbed-manager] => (item=None)
2026-02-05 03:52:34.874144 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:34.874159 | orchestrator |
2026-02-05 03:52:34.874171 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-05 03:52:35.856898 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:35.856986 | orchestrator |
2026-02-05 03:52:35.856999 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-05 03:52:35.904208 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:52:35.904276 | orchestrator |
2026-02-05 03:52:35.904283 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-05 03:52:36.002230 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-05 03:52:36.002368 | orchestrator |
2026-02-05 03:52:36.002385 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-05 03:52:37.750670 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:37.750778 | orchestrator |
2026-02-05 03:52:37.750796 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-05 03:52:38.339499 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:38.339610 | orchestrator |
2026-02-05 03:52:38.339627 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-05 03:52:40.215248 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-02-05 03:52:40.215384 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-02-05 03:52:40.215400 | orchestrator |
2026-02-05 03:52:40.215412 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-05 03:52:41.356850 | orchestrator | changed: [testbed-manager]
2026-02-05 03:52:41.357001 | orchestrator |
2026-02-05 03:52:41.357047 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-05 03:52:41.928559 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:41.928633 | orchestrator |
2026-02-05 03:52:41.928640 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-05 03:52:42.474912 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:42.475016 | orchestrator |
2026-02-05 03:52:42.475050 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-05 03:52:42.529634 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:52:42.529738 | orchestrator |
2026-02-05 03:52:42.529754 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-05 03:52:42.595955 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-05 03:52:42.596050 | orchestrator |
2026-02-05 03:52:42.596075 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-05 03:52:42.666791 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:42.666914 | orchestrator |
2026-02-05 03:52:42.666939 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-05 03:52:45.617542 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-02-05 03:52:45.617632 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-02-05 03:52:45.617642 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-02-05 03:52:45.617648 | orchestrator |
2026-02-05 03:52:45.617656 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-05 03:52:46.596795 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:46.596905 | orchestrator |
2026-02-05 03:52:46.596918 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-05 03:52:47.624759 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:47.624865 | orchestrator |
2026-02-05 03:52:47.624884 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-05 03:52:48.669771 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:48.669851 | orchestrator |
2026-02-05 03:52:48.669860 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-05 03:52:48.761138 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-05 03:52:48.761231 | orchestrator |
2026-02-05 03:52:48.761249 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-05 03:52:48.818755 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:48.818840 | orchestrator |
2026-02-05 03:52:48.818850 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-05 03:52:49.844189 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-02-05 03:52:49.844272 | orchestrator |
2026-02-05 03:52:49.844282 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-05 03:52:49.918427 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-05 03:52:49.918506 | orchestrator |
2026-02-05 03:52:49.918518 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-05 03:52:50.899766 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:50.899847 | orchestrator |
2026-02-05 03:52:50.899856 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-05 03:52:52.071995 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:52.072107 | orchestrator |
2026-02-05 03:52:52.072125 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-05 03:52:52.158242 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:52:52.158401 | orchestrator |
2026-02-05 03:52:52.158420 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-05 03:52:52.224667 | orchestrator | ok: [testbed-manager]
2026-02-05 03:52:52.224755 | orchestrator |
2026-02-05 03:52:52.224772 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-05 03:52:53.603020 | orchestrator | changed: [testbed-manager]
2026-02-05 03:52:53.603119 | orchestrator |
2026-02-05 03:52:53.603135 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-05 03:54:00.404719 | orchestrator | changed: [testbed-manager]
2026-02-05 03:54:00.404835 | orchestrator |
2026-02-05 03:54:00.404852 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-05 03:54:01.521295 | orchestrator | ok: [testbed-manager]
2026-02-05 03:54:01.521421 | orchestrator |
2026-02-05 03:54:01.521436 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-05 03:54:01.574141 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:54:01.574284 | orchestrator |
2026-02-05 03:54:01.574304 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-05 03:54:02.421750 | orchestrator | ok: [testbed-manager]
2026-02-05 03:54:02.421910 | orchestrator |
2026-02-05 03:54:02.421940 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-05 03:54:02.505705 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:54:02.505806 | orchestrator |
2026-02-05 03:54:02.505821 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-05 03:54:02.505834 | orchestrator |
2026-02-05 03:54:02.505846 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-05 03:54:17.453345 | orchestrator | changed: [testbed-manager]
2026-02-05 03:54:17.453507 | orchestrator |
2026-02-05 03:54:17.453526 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-05 03:55:17.533393 | orchestrator | Pausing for 60 seconds
2026-02-05 03:55:17.533618 | orchestrator | changed: [testbed-manager]
2026-02-05 03:55:17.533652 | orchestrator |
2026-02-05 03:55:17.533675 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-02-05 03:55:17.579040 | orchestrator | ok: [testbed-manager]
2026-02-05 03:55:17.579133 | orchestrator |
2026-02-05 03:55:17.579149 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-05 03:55:20.980549 | orchestrator | changed: [testbed-manager]
2026-02-05 03:55:20.980644 | orchestrator |
2026-02-05 03:55:20.980655 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-05 03:56:23.577877 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-05 03:56:23.577972 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-05 03:56:23.577983 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-05 03:56:23.577992 | orchestrator | changed: [testbed-manager]
2026-02-05 03:56:23.578002 | orchestrator |
2026-02-05 03:56:23.578011 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-05 03:56:35.324847 | orchestrator | changed: [testbed-manager]
2026-02-05 03:56:35.324971 | orchestrator |
2026-02-05 03:56:35.324997 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-05 03:56:35.409942 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-05 03:56:35.410121 | orchestrator |
2026-02-05 03:56:35.410142 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-05 03:56:35.410153 | orchestrator |
2026-02-05 03:56:35.410163 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-05 03:56:35.466416 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:56:35.466515 | orchestrator |
2026-02-05 03:56:35.466531 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-05 03:56:35.552181 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-05 03:56:35.552271 | orchestrator |
2026-02-05 03:56:35.552302 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-05 03:56:36.683129 | orchestrator | changed: [testbed-manager]
2026-02-05 03:56:36.683235 | orchestrator |
2026-02-05 03:56:36.683254 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-05 03:56:40.364346 | orchestrator | ok: [testbed-manager]
2026-02-05 03:56:40.364420 | orchestrator |
2026-02-05 03:56:40.364428 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-05 03:56:40.446830 | orchestrator | ok: [testbed-manager] => {
2026-02-05 03:56:40.446905 | orchestrator | "version_check_result.stdout_lines": [
2026-02-05 03:56:40.446913 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-05 03:56:40.446919 | orchestrator | "Checking running containers against expected versions...",
2026-02-05 03:56:40.446926 | orchestrator | "",
2026-02-05 03:56:40.446932 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-05 03:56:40.446938 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-05 03:56:40.446944 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.446950 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-05 03:56:40.446955 | orchestrator | " Status: ✅ MATCH",
2026-02-05 03:56:40.446960 | orchestrator | "",
2026-02-05 03:56:40.446966 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-05 03:56:40.446972 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-05 03:56:40.446978 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.446983 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-05 03:56:40.446988 | orchestrator | " Status: ✅ MATCH",
2026-02-05 03:56:40.446993 | orchestrator | "",
2026-02-05 03:56:40.446999 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-05 03:56:40.447004 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-05 03:56:40.447010 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.447015 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-05 03:56:40.447022 | orchestrator | " Status: ✅ MATCH",
2026-02-05 03:56:40.447030 | orchestrator | "",
2026-02-05 03:56:40.447039 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-05 03:56:40.447047 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-05 03:56:40.447056 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.447064 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-05 03:56:40.447071 | orchestrator | " Status: ✅ MATCH",
2026-02-05 03:56:40.447079 | orchestrator | "",
2026-02-05 03:56:40.447088 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-05 03:56:40.447097 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-05 03:56:40.447106 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.447114 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-05 03:56:40.447123 | orchestrator | " Status: ✅ MATCH",
2026-02-05 03:56:40.447131 | orchestrator | "",
2026-02-05 03:56:40.447140 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-05 03:56:40.447164 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-05 03:56:40.447169 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.447175 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-05 03:56:40.447180 | orchestrator | " Status: ✅ MATCH",
2026-02-05 03:56:40.447185 | orchestrator | "",
2026-02-05 03:56:40.447190 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-05 03:56:40.447195 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-05 03:56:40.447200 | orchestrator | " Enabled: true",
2026-02-05 03:56:40.447205 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-05
03:56:40.447211 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447216 | orchestrator | "", 2026-02-05 03:56:40.447221 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-05 03:56:40.447226 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 03:56:40.447231 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447244 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 03:56:40.447249 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447254 | orchestrator | "", 2026-02-05 03:56:40.447259 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-05 03:56:40.447265 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-05 03:56:40.447270 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447275 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-05 03:56:40.447280 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447285 | orchestrator | "", 2026-02-05 03:56:40.447293 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-05 03:56:40.447299 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 03:56:40.447308 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447320 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 03:56:40.447330 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447337 | orchestrator | "", 2026-02-05 03:56:40.447346 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-05 03:56:40.447353 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447361 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447370 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447378 | orchestrator | " Status: ✅ MATCH", 2026-02-05 
03:56:40.447385 | orchestrator | "", 2026-02-05 03:56:40.447392 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-05 03:56:40.447400 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447407 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447415 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447422 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447430 | orchestrator | "", 2026-02-05 03:56:40.447439 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-05 03:56:40.447447 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447456 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447464 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447471 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447480 | orchestrator | "", 2026-02-05 03:56:40.447487 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-05 03:56:40.447496 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447504 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447512 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447539 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447547 | orchestrator | "", 2026-02-05 03:56:40.447588 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-05 03:56:40.447599 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447612 | orchestrator | " Enabled: true", 2026-02-05 03:56:40.447617 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-05 03:56:40.447623 | orchestrator | " Status: ✅ MATCH", 2026-02-05 03:56:40.447628 | orchestrator | "", 2026-02-05 03:56:40.447633 | orchestrator | "=== Summary 
===", 2026-02-05 03:56:40.447638 | orchestrator | "Errors (version mismatches): 0", 2026-02-05 03:56:40.447643 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-05 03:56:40.447648 | orchestrator | "", 2026-02-05 03:56:40.447654 | orchestrator | "✅ All running containers match expected versions!" 2026-02-05 03:56:40.447659 | orchestrator | ] 2026-02-05 03:56:40.447664 | orchestrator | } 2026-02-05 03:56:40.447670 | orchestrator | 2026-02-05 03:56:40.447675 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-05 03:56:40.514728 | orchestrator | skipping: [testbed-manager] 2026-02-05 03:56:40.514799 | orchestrator | 2026-02-05 03:56:40.514806 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:56:40.514812 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-05 03:56:40.514816 | orchestrator | 2026-02-05 03:56:53.135777 | orchestrator | 2026-02-05 03:56:53 | INFO  | Task 019f7757-81ec-4064-8294-c5d315053803 (sync inventory) is running in background. Output coming soon. 
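The version check results above are produced by a script deployed by the verify-versions tasks; the script itself does not appear in the log. A minimal sketch of the per-service comparison it appears to perform (function name, exit codes, and message wording are assumptions, not the deployed script):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of one version check step: compare the image a
# container is actually running against the expected image reference.
check_container_version() {
    local name=$1 expected=$2 running
    # .Config.Image is the image reference the container was created from
    running=$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null) || {
        echo "Warning: $name is not running"
        return 2
    }
    if [ "$running" = "$expected" ]; then
        echo "Status: MATCH"
    else
        echo "Status: MISMATCH (running $running, expected $expected)"
        return 1
    fi
}
```

Called once per service, e.g. `check_container_version osismclient registry.osism.tech/osism/osism:0.20251208.0`, and tallied into the "Errors / Warnings" summary.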
2026-02-05 03:57:22.317846 | orchestrator | 2026-02-05 03:56:54 | INFO  | Starting group_vars file reorganization
2026-02-05 03:57:22.317931 | orchestrator | 2026-02-05 03:56:54 | INFO  | Moved 0 file(s) to their respective directories
2026-02-05 03:57:22.317940 | orchestrator | 2026-02-05 03:56:54 | INFO  | Group_vars file reorganization completed
2026-02-05 03:57:22.317962 | orchestrator | 2026-02-05 03:56:57 | INFO  | Starting variable preparation from inventory
2026-02-05 03:57:22.317969 | orchestrator | 2026-02-05 03:57:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-05 03:57:22.317975 | orchestrator | 2026-02-05 03:57:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-05 03:57:22.317981 | orchestrator | 2026-02-05 03:57:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-05 03:57:22.317986 | orchestrator | 2026-02-05 03:57:00 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-05 03:57:22.317992 | orchestrator | 2026-02-05 03:57:00 | INFO  | Variable preparation completed
2026-02-05 03:57:22.317998 | orchestrator | 2026-02-05 03:57:02 | INFO  | Starting inventory overwrite handling
2026-02-05 03:57:22.318003 | orchestrator | 2026-02-05 03:57:02 | INFO  | Handling group overwrites in 99-overwrite
2026-02-05 03:57:22.318009 | orchestrator | 2026-02-05 03:57:02 | INFO  | Removing group frr:children from 60-generic
2026-02-05 03:57:22.318057 | orchestrator | 2026-02-05 03:57:02 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-05 03:57:22.318063 | orchestrator | 2026-02-05 03:57:02 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-05 03:57:22.318068 | orchestrator | 2026-02-05 03:57:02 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-05 03:57:22.318075 | orchestrator | 2026-02-05 03:57:02 | INFO  | Handling group overwrites in 20-roles
2026-02-05 03:57:22.318084 | orchestrator | 2026-02-05 03:57:02 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-05 03:57:22.318094 | orchestrator | 2026-02-05 03:57:02 | INFO  | Removed 5 group(s) in total
2026-02-05 03:57:22.318104 | orchestrator | 2026-02-05 03:57:02 | INFO  | Inventory overwrite handling completed
2026-02-05 03:57:22.318113 | orchestrator | 2026-02-05 03:57:03 | INFO  | Starting merge of inventory files
2026-02-05 03:57:22.318121 | orchestrator | 2026-02-05 03:57:03 | INFO  | Inventory files merged successfully
2026-02-05 03:57:22.318153 | orchestrator | 2026-02-05 03:57:08 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-05 03:57:22.318162 | orchestrator | 2026-02-05 03:57:20 | INFO  | Successfully wrote ClusterShell configuration
2026-02-05 03:57:22.646126 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-05 03:57:22.646227 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-05 03:57:22.646243 | orchestrator | + local max_attempts=60
2026-02-05 03:57:22.646265 | orchestrator | + local name=kolla-ansible
2026-02-05 03:57:22.646285 | orchestrator | + local attempt_num=1
2026-02-05 03:57:22.646955 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-05 03:57:22.676796 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-05 03:57:22.676885 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-05 03:57:22.676895 | orchestrator | + local max_attempts=60
2026-02-05 03:57:22.676901 | orchestrator | + local name=osism-ansible
2026-02-05 03:57:22.676905 | orchestrator | + local attempt_num=1
2026-02-05 03:57:22.678119 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-05 03:57:22.720387 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-05 03:57:22.720466 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-02-05 03:57:22.908866 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-02-05 03:57:22.908951 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-05 03:57:22.908960 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-05 03:57:22.908965 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-02-05 03:57:22.908972 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-02-05 03:57:22.908976 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-02-05 03:57:22.908980 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-02-05 03:57:22.908984 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-02-05 03:57:22.908988 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 20 seconds ago
2026-02-05 03:57:22.908992 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-02-05 03:57:22.908996 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-02-05 03:57:22.909000 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-02-05 03:57:22.909003 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-05 03:57:22.909024 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-02-05 03:57:22.909029 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-02-05 03:57:22.909032 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-02-05 03:57:22.916113 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-02-05 03:57:22.916191 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-02-05 03:57:22.916202 | orchestrator | + osism apply facts
2026-02-05 03:57:35.100275 | orchestrator | 2026-02-05 03:57:35 | INFO  | Task f59a22c1-b10d-4ecc-8be7-b2c95bdae39b (facts) was prepared for execution.
2026-02-05 03:57:35.100405 | orchestrator | 2026-02-05 03:57:35 | INFO  | It takes a moment until task f59a22c1-b10d-4ecc-8be7-b2c95bdae39b (facts) has been started and output is visible here.
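The `wait_for_container_healthy` calls traced above poll `docker inspect` for the container health status. A sketch of the helper, reconstructed from the `set -x` trace (both containers were already healthy in this run, so the retry branch never fires; its sleep interval, the error message, and the use of plain `docker` instead of the traced `/usr/bin/docker` are assumptions):

```shell
#!/usr/bin/env bash
# wait_for_container_healthy <max_attempts> <name>: block until the named
# container reports "healthy", or fail after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if ((attempt_num >= max_attempts)); then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed poll interval; not visible in the trace
    done
}
```

In the log it is invoked as `wait_for_container_healthy 60 kolla-ansible` and `wait_for_container_healthy 60 osism-ansible` before `docker compose ps` is printed.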
2026-02-05 03:57:58.549578 | orchestrator |
2026-02-05 03:57:58.549700 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-05 03:57:58.549709 | orchestrator |
2026-02-05 03:57:58.549714 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-05 03:57:58.549718 | orchestrator | Thursday 05 February 2026 03:57:41 +0000 (0:00:02.262) 0:00:02.262 *****
2026-02-05 03:57:58.549722 | orchestrator | ok: [testbed-manager]
2026-02-05 03:57:58.549727 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:57:58.549732 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:57:58.549736 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:57:58.549740 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:57:58.549743 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:57:58.549747 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:57:58.549751 | orchestrator |
2026-02-05 03:57:58.549755 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-05 03:57:58.549758 | orchestrator | Thursday 05 February 2026 03:57:45 +0000 (0:00:03.621) 0:00:05.884 *****
2026-02-05 03:57:58.549762 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:57:58.549767 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:57:58.549771 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:57:58.549775 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:57:58.549779 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:57:58.549782 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:57:58.549786 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:57:58.549790 | orchestrator |
2026-02-05 03:57:58.549793 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 03:57:58.549797 | orchestrator |
2026-02-05 03:57:58.549801 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 03:57:58.549805 | orchestrator | Thursday 05 February 2026 03:57:48 +0000 (0:00:02.703) 0:00:08.587 *****
2026-02-05 03:57:58.549808 | orchestrator | ok: [testbed-node-1]
2026-02-05 03:57:58.549827 | orchestrator | ok: [testbed-manager]
2026-02-05 03:57:58.549831 | orchestrator | ok: [testbed-node-2]
2026-02-05 03:57:58.549835 | orchestrator | ok: [testbed-node-0]
2026-02-05 03:57:58.549841 | orchestrator | ok: [testbed-node-4]
2026-02-05 03:57:58.549845 | orchestrator | ok: [testbed-node-3]
2026-02-05 03:57:58.549849 | orchestrator | ok: [testbed-node-5]
2026-02-05 03:57:58.549852 | orchestrator |
2026-02-05 03:57:58.549856 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-05 03:57:58.549860 | orchestrator |
2026-02-05 03:57:58.549864 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-05 03:57:58.549868 | orchestrator | Thursday 05 February 2026 03:57:55 +0000 (0:00:07.245) 0:00:15.833 *****
2026-02-05 03:57:58.549872 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:57:58.549893 | orchestrator | skipping: [testbed-node-0]
2026-02-05 03:57:58.549897 | orchestrator | skipping: [testbed-node-1]
2026-02-05 03:57:58.549901 | orchestrator | skipping: [testbed-node-2]
2026-02-05 03:57:58.549904 | orchestrator | skipping: [testbed-node-3]
2026-02-05 03:57:58.549908 | orchestrator | skipping: [testbed-node-4]
2026-02-05 03:57:58.549912 | orchestrator | skipping: [testbed-node-5]
2026-02-05 03:57:58.549916 | orchestrator |
2026-02-05 03:57:58.549919 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 03:57:58.549923 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549928 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549932 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549936 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549940 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549943 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549947 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 03:57:58.549951 | orchestrator |
2026-02-05 03:57:58.549955 | orchestrator |
2026-02-05 03:57:58.549958 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 03:57:58.549962 | orchestrator | Thursday 05 February 2026 03:57:58 +0000 (0:00:02.621) 0:00:18.454 *****
2026-02-05 03:57:58.549966 | orchestrator | ===============================================================================
2026-02-05 03:57:58.549969 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.25s
2026-02-05 03:57:58.549974 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.62s
2026-02-05 03:57:58.549977 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.70s
2026-02-05 03:57:58.549981 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.62s
2026-02-05 03:57:58.870292 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-05 03:57:58.960361 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 03:57:58.960829 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-05 03:57:59.009304 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-02-05 03:57:59.009403 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-02-05 03:57:59.017803 | orchestrator | + set -e
2026-02-05 03:57:59.017874 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-02-05 03:57:59.017886 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-05 03:57:59.025088 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-02-05 03:57:59.032756 | orchestrator |
2026-02-05 03:57:59.032847 | orchestrator | # UPGRADE SERVICES
2026-02-05 03:57:59.032870 | orchestrator |
2026-02-05 03:57:59.032891 | orchestrator | + set -e
2026-02-05 03:57:59.032910 | orchestrator | + echo
2026-02-05 03:57:59.032921 | orchestrator | + echo '# UPGRADE SERVICES'
2026-02-05 03:57:59.032933 | orchestrator | + echo
2026-02-05 03:57:59.032944 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 03:57:59.033713 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 03:57:59.033748 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 03:57:59.033760 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 03:57:59.033771 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 03:57:59.033782 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 03:57:59.033800 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 03:57:59.033819 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 03:57:59.033874 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 03:57:59.033897 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 03:57:59.033915 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 03:57:59.033933 | orchestrator | ++ export ARA=false
2026-02-05 03:57:59.033945 | orchestrator | ++ ARA=false
2026-02-05 03:57:59.033957 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 03:57:59.033967 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 03:57:59.033978 | orchestrator | ++ export TEMPEST=false
2026-02-05 03:57:59.033989 | orchestrator | ++ TEMPEST=false
2026-02-05 03:57:59.034000 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 03:57:59.034010 | orchestrator | ++ IS_ZUUL=true
2026-02-05 03:57:59.034104 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:57:59.034124 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:57:59.034143 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 03:57:59.034155 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 03:57:59.034166 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 03:57:59.034177 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 03:57:59.034188 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 03:57:59.034199 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 03:57:59.034210 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 03:57:59.034221 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 03:57:59.034232 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-05 03:57:59.034243 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-05 03:57:59.034254 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-05 03:57:59.034267 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-05 03:57:59.034280 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-05 03:57:59.039027 | orchestrator | + set -e
2026-02-05 03:57:59.039133 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 03:57:59.040017 | orchestrator |
2026-02-05 03:57:59.040123 | orchestrator | # PULL IMAGES
2026-02-05 03:57:59.040150 | orchestrator |
2026-02-05 03:57:59.040170 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 03:57:59.040190 | orchestrator | ++ INTERACTIVE=false
2026-02-05 03:57:59.040209 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 03:57:59.040228 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 03:57:59.040246 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 03:57:59.040261 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 03:57:59.040272 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 03:57:59.040283 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 03:57:59.040294 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 03:57:59.040305 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 03:57:59.040341 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 03:57:59.040353 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 03:57:59.040365 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 03:57:59.040376 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 03:57:59.040387 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 03:57:59.040398 | orchestrator | ++ export ARA=false
2026-02-05 03:57:59.040409 | orchestrator | ++ ARA=false
2026-02-05 03:57:59.040420 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 03:57:59.040431 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 03:57:59.040442 | orchestrator | ++ export TEMPEST=false
2026-02-05 03:57:59.040453 | orchestrator | ++ TEMPEST=false
2026-02-05 03:57:59.040464 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 03:57:59.040475 | orchestrator | ++ IS_ZUUL=true
2026-02-05 03:57:59.040486 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:57:59.040497 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 03:57:59.040508 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 03:57:59.040520 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 03:57:59.040531 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 03:57:59.040542 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 03:57:59.040552 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 03:57:59.040563 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 03:57:59.040574 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 03:57:59.040585 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 03:57:59.040596 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-05 03:57:59.040607 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-05 03:57:59.040649 | orchestrator | + echo
2026-02-05 03:57:59.040670 | orchestrator | + echo '# PULL IMAGES'
2026-02-05 03:57:59.040684 | orchestrator | + echo
2026-02-05 03:57:59.040695 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-05 03:57:59.074791 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 03:57:59.074899 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-05 03:58:01.301958 | orchestrator | 2026-02-05 03:58:01 | INFO  | Trying to run play pull-images in environment custom
2026-02-05 03:58:11.520459 | orchestrator | 2026-02-05 03:58:11 | INFO  | Task 91fc12e7-cc63-447d-9e1d-9aef7013a056 (pull-images) was prepared for execution.
2026-02-05 03:58:11.520569 | orchestrator | 2026-02-05 03:58:11 | INFO  | Task 91fc12e7-cc63-447d-9e1d-9aef7013a056 is running in background. No more output. Check ARA for logs.
2026-02-05 03:58:11.891381 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-05 03:58:11.902174 | orchestrator | + set -e
2026-02-05 03:58:11.902250 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 03:58:11.902263 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 03:58:11.902274 | orchestrator | ++ INTERACTIVE=false
2026-02-05 03:58:11.902289 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 03:58:11.902303 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 03:58:11.902316 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-05 03:58:11.904010 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-05 03:58:11.916533 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-05 03:58:11.916582 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-05 03:58:11.917354 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-05 03:58:11.967595 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-05 03:58:11.967733 | orchestrator | + osism apply frr
2026-02-05 03:58:24.173193 | orchestrator | 2026-02-05 03:58:24 | INFO  | Task c303e5d7-a8ac-4202-b868-b5b5c28ab596 (frr) was prepared for execution.
2026-02-05 03:58:24.173283 | orchestrator | 2026-02-05 03:58:24 | INFO  | It takes a moment until task c303e5d7-a8ac-4202-b868-b5b5c28ab596 (frr) has been started and output is visible here.
2026-02-05 03:58:46.555226 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-05 03:58:46.555333 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-05 03:58:46.555357 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-05 03:58:46.555366 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-05 03:58:46.555384 | orchestrator |
2026-02-05 03:58:46.555444 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-05 03:58:46.555454 | orchestrator |
2026-02-05 03:58:46.555463 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-05 03:58:46.555472 | orchestrator | Thursday 05 February 2026 03:58:32 +0000 (0:00:03.670) 0:00:03.670 *****
2026-02-05 03:58:46.555482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 03:58:46.555492 | orchestrator |
2026-02-05 03:58:46.555501 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-05 03:58:46.555510 | orchestrator | Thursday 05 February 2026 03:58:33 +0000 (0:00:01.090) 0:00:04.761 *****
2026-02-05 03:58:46.555520 | orchestrator | ok: [testbed-manager]
2026-02-05 03:58:46.555529 | orchestrator |
2026-02-05 03:58:46.555538 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-05 03:58:46.555548 | orchestrator | Thursday 05 February 2026 03:58:35 +0000 (0:00:01.519) 0:00:06.281 *****
2026-02-05 03:58:46.555556 | orchestrator | ok: [testbed-manager]
2026-02-05 03:58:46.555565 | orchestrator |
2026-02-05 03:58:46.555574 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-05 03:58:46.555583 | orchestrator | Thursday 05 February 2026 03:58:36 +0000 (0:00:01.860) 0:00:08.142 *****
2026-02-05 03:58:46.555592 | orchestrator | ok: [testbed-manager]
2026-02-05 03:58:46.555601 | orchestrator |
2026-02-05 03:58:46.555609 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-05 03:58:46.555618 | orchestrator | Thursday 05 February 2026 03:58:37 +0000 (0:00:00.987) 0:00:09.129 *****
2026-02-05 03:58:46.555701 | orchestrator | ok: [testbed-manager]
2026-02-05 03:58:46.555713 | orchestrator |
2026-02-05 03:58:46.555722 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-05 03:58:46.555731 | orchestrator | Thursday 05 February 2026 03:58:38 +0000 (0:00:01.449) 0:00:10.057 *****
2026-02-05 03:58:46.555740 | orchestrator | ok: [testbed-manager]
2026-02-05 03:58:46.555748 | orchestrator |
2026-02-05 03:58:46.555758 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-05 03:58:46.555767 | orchestrator | Thursday 05 February 2026 03:58:40 +0000 (0:00:00.175) 0:00:11.506 *****
2026-02-05 03:58:46.555776 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:58:46.555785 | orchestrator |
2026-02-05 03:58:46.555794 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-05 03:58:46.555803 | orchestrator | Thursday 05 February 2026 03:58:40 +0000 (0:00:00.175) 0:00:11.682 *****
2026-02-05 03:58:46.555811 | orchestrator | skipping: [testbed-manager]
2026-02-05 03:58:46.555820 | orchestrator | 2026-02-05 03:58:46.555829 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-05 03:58:46.555838 | orchestrator | Thursday 05 February 2026 03:58:40 +0000 (0:00:00.177) 0:00:11.860 ***** 2026-02-05 03:58:46.555846 | orchestrator | ok: [testbed-manager] 2026-02-05 03:58:46.555855 | orchestrator | 2026-02-05 03:58:46.555864 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-05 03:58:46.555872 | orchestrator | Thursday 05 February 2026 03:58:41 +0000 (0:00:00.980) 0:00:12.840 ***** 2026-02-05 03:58:46.555881 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-05 03:58:46.555906 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-05 03:58:46.555917 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-05 03:58:46.555926 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-05 03:58:46.555935 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-05 03:58:46.555944 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-05 03:58:46.555953 | orchestrator | 2026-02-05 03:58:46.555962 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-05 03:58:46.555970 | orchestrator | Thursday 05 February 2026 03:58:44 +0000 (0:00:02.783) 0:00:15.623 ***** 2026-02-05 03:58:46.555979 | orchestrator | ok: [testbed-manager] 2026-02-05 03:58:46.555987 | orchestrator | 2026-02-05 03:58:46.555996 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 03:58:46.556005 | 
orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 03:58:46.556014 | orchestrator | 2026-02-05 03:58:46.556022 | orchestrator | 2026-02-05 03:58:46.556031 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 03:58:46.556040 | orchestrator | Thursday 05 February 2026 03:58:46 +0000 (0:00:01.856) 0:00:17.479 ***** 2026-02-05 03:58:46.556052 | orchestrator | =============================================================================== 2026-02-05 03:58:46.556067 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.78s 2026-02-05 03:58:46.556081 | orchestrator | osism.services.frr : Install frr package -------------------------------- 1.86s 2026-02-05 03:58:46.556115 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.86s 2026-02-05 03:58:46.556131 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.52s 2026-02-05 03:58:46.556144 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.45s 2026-02-05 03:58:46.556157 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.09s 2026-02-05 03:58:46.556171 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s 2026-02-05 03:58:46.556197 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.98s 2026-02-05 03:58:46.556212 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.93s 2026-02-05 03:58:46.556227 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-02-05 03:58:46.556241 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.18s 2026-02-05 03:58:46.908584 | orchestrator | + osism apply 
kubernetes 2026-02-05 03:58:49.085196 | orchestrator | 2026-02-05 03:58:49 | INFO  | Task 925a66e5-0c7e-43a7-a855-0fee61908d1b (kubernetes) was prepared for execution. 2026-02-05 03:58:49.085275 | orchestrator | 2026-02-05 03:58:49 | INFO  | It takes a moment until task 925a66e5-0c7e-43a7-a855-0fee61908d1b (kubernetes) has been started and output is visible here. 2026-02-05 03:59:34.545517 | orchestrator | 2026-02-05 03:59:34.545686 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-05 03:59:34.545702 | orchestrator | 2026-02-05 03:59:34.545710 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-05 03:59:34.545718 | orchestrator | Thursday 05 February 2026 03:58:56 +0000 (0:00:02.318) 0:00:02.318 ***** 2026-02-05 03:59:34.545725 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:59:34.545733 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:59:34.545739 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:59:34.545745 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:59:34.545752 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:59:34.545758 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:59:34.545764 | orchestrator | 2026-02-05 03:59:34.545770 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-05 03:59:34.545777 | orchestrator | Thursday 05 February 2026 03:58:59 +0000 (0:00:03.626) 0:00:05.945 ***** 2026-02-05 03:59:34.545783 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.545790 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.545796 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.545802 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.545809 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.545815 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.545821 | orchestrator | 2026-02-05 03:59:34.545827 | 
orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-05 03:59:34.545834 | orchestrator | Thursday 05 February 2026 03:59:01 +0000 (0:00:02.121) 0:00:08.066 ***** 2026-02-05 03:59:34.545840 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.545847 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.545853 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.545859 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.545866 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.545872 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.545878 | orchestrator | 2026-02-05 03:59:34.545884 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-05 03:59:34.545890 | orchestrator | Thursday 05 February 2026 03:59:04 +0000 (0:00:02.222) 0:00:10.289 ***** 2026-02-05 03:59:34.545897 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:59:34.545903 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:59:34.545909 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:59:34.545915 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:59:34.545921 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:59:34.545927 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:59:34.545933 | orchestrator | 2026-02-05 03:59:34.545940 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-05 03:59:34.545946 | orchestrator | Thursday 05 February 2026 03:59:07 +0000 (0:00:03.312) 0:00:13.601 ***** 2026-02-05 03:59:34.545952 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:59:34.545958 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:59:34.545964 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:59:34.545970 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:59:34.545996 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:59:34.546002 | orchestrator | ok: [testbed-node-0] 
2026-02-05 03:59:34.546008 | orchestrator | 2026-02-05 03:59:34.546075 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-05 03:59:34.546086 | orchestrator | Thursday 05 February 2026 03:59:10 +0000 (0:00:03.091) 0:00:16.693 ***** 2026-02-05 03:59:34.546093 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:59:34.546100 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:59:34.546107 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:59:34.546115 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:59:34.546122 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:59:34.546130 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:59:34.546139 | orchestrator | 2026-02-05 03:59:34.546150 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-05 03:59:34.546160 | orchestrator | Thursday 05 February 2026 03:59:12 +0000 (0:00:02.168) 0:00:18.861 ***** 2026-02-05 03:59:34.546170 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546180 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546191 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546200 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546210 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.546219 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546229 | orchestrator | 2026-02-05 03:59:34.546239 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-05 03:59:34.546249 | orchestrator | Thursday 05 February 2026 03:59:14 +0000 (0:00:01.966) 0:00:20.827 ***** 2026-02-05 03:59:34.546260 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546271 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546281 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546292 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546313 | orchestrator 
| skipping: [testbed-node-1] 2026-02-05 03:59:34.546324 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546334 | orchestrator | 2026-02-05 03:59:34.546345 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-05 03:59:34.546356 | orchestrator | Thursday 05 February 2026 03:59:16 +0000 (0:00:01.780) 0:00:22.608 ***** 2026-02-05 03:59:34.546367 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 03:59:34.546380 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 03:59:34.546390 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546402 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 03:59:34.546410 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 03:59:34.546418 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546426 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 03:59:34.546433 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 03:59:34.546439 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546446 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 03:59:34.546452 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 03:59:34.546458 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546482 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 03:59:34.546488 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 03:59:34.546495 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.546501 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-02-05 03:59:34.546507 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 03:59:34.546513 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546519 | orchestrator | 2026-02-05 03:59:34.546534 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-05 03:59:34.546540 | orchestrator | Thursday 05 February 2026 03:59:18 +0000 (0:00:01.918) 0:00:24.527 ***** 2026-02-05 03:59:34.546546 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546552 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546559 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546565 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546571 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.546602 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546608 | orchestrator | 2026-02-05 03:59:34.546615 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-05 03:59:34.546622 | orchestrator | Thursday 05 February 2026 03:59:20 +0000 (0:00:02.192) 0:00:26.719 ***** 2026-02-05 03:59:34.546628 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:59:34.546634 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:59:34.546641 | orchestrator | ok: [testbed-node-5] 2026-02-05 03:59:34.546647 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:59:34.546653 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:59:34.546659 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:59:34.546665 | orchestrator | 2026-02-05 03:59:34.546671 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-05 03:59:34.546678 | orchestrator | Thursday 05 February 2026 03:59:22 +0000 (0:00:02.078) 0:00:28.798 ***** 2026-02-05 03:59:34.546684 | orchestrator | ok: 
[testbed-node-5] 2026-02-05 03:59:34.546690 | orchestrator | ok: [testbed-node-2] 2026-02-05 03:59:34.546696 | orchestrator | ok: [testbed-node-0] 2026-02-05 03:59:34.546702 | orchestrator | ok: [testbed-node-4] 2026-02-05 03:59:34.546708 | orchestrator | ok: [testbed-node-3] 2026-02-05 03:59:34.546714 | orchestrator | ok: [testbed-node-1] 2026-02-05 03:59:34.546720 | orchestrator | 2026-02-05 03:59:34.546727 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-05 03:59:34.546733 | orchestrator | Thursday 05 February 2026 03:59:25 +0000 (0:00:02.905) 0:00:31.704 ***** 2026-02-05 03:59:34.546739 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546745 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546751 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546758 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546764 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.546770 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546776 | orchestrator | 2026-02-05 03:59:34.546782 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-05 03:59:34.546788 | orchestrator | Thursday 05 February 2026 03:59:27 +0000 (0:00:02.038) 0:00:33.742 ***** 2026-02-05 03:59:34.546795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546801 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546807 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546813 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546819 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.546825 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546831 | orchestrator | 2026-02-05 03:59:34.546838 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-05 
03:59:34.546846 | orchestrator | Thursday 05 February 2026 03:59:29 +0000 (0:00:02.205) 0:00:35.948 ***** 2026-02-05 03:59:34.546852 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546861 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546868 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546874 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546880 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.546886 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.546892 | orchestrator | 2026-02-05 03:59:34.546899 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-05 03:59:34.546905 | orchestrator | Thursday 05 February 2026 03:59:31 +0000 (0:00:01.949) 0:00:37.897 ***** 2026-02-05 03:59:34.546916 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-05 03:59:34.546923 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-05 03:59:34.546929 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.546935 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-05 03:59:34.546942 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-05 03:59:34.546948 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.546954 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-05 03:59:34.546960 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-05 03:59:34.546966 | orchestrator | skipping: [testbed-node-5] 2026-02-05 03:59:34.546972 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-05 03:59:34.546978 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-05 03:59:34.546985 | orchestrator | skipping: [testbed-node-0] 2026-02-05 03:59:34.546991 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-05 03:59:34.546997 | orchestrator | skipping: [testbed-node-1] 
=> (item=rancher/k3s)  2026-02-05 03:59:34.547003 | orchestrator | skipping: [testbed-node-1] 2026-02-05 03:59:34.547009 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-05 03:59:34.547028 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-05 03:59:34.547034 | orchestrator | skipping: [testbed-node-2] 2026-02-05 03:59:34.547040 | orchestrator | 2026-02-05 03:59:34.547046 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-05 03:59:34.547053 | orchestrator | Thursday 05 February 2026 03:59:34 +0000 (0:00:02.330) 0:00:40.227 ***** 2026-02-05 03:59:34.547059 | orchestrator | skipping: [testbed-node-3] 2026-02-05 03:59:34.547065 | orchestrator | skipping: [testbed-node-4] 2026-02-05 03:59:34.547077 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:01:09.391030 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:01:09.391118 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391128 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391136 | orchestrator | 2026-02-05 04:01:09.391145 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-05 04:01:09.391153 | orchestrator | Thursday 05 February 2026 03:59:35 +0000 (0:00:01.732) 0:00:41.960 ***** 2026-02-05 04:01:09.391160 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:01:09.391167 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:01:09.391174 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:01:09.391181 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:01:09.391188 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391195 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391202 | orchestrator | 2026-02-05 04:01:09.391209 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-05 
04:01:09.391216 | orchestrator | 2026-02-05 04:01:09.391224 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-05 04:01:09.391232 | orchestrator | Thursday 05 February 2026 03:59:38 +0000 (0:00:02.699) 0:00:44.659 ***** 2026-02-05 04:01:09.391239 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391246 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.391276 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.391284 | orchestrator | 2026-02-05 04:01:09.391293 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-05 04:01:09.391300 | orchestrator | Thursday 05 February 2026 03:59:40 +0000 (0:00:01.796) 0:00:46.456 ***** 2026-02-05 04:01:09.391307 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.391314 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391321 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.391327 | orchestrator | 2026-02-05 04:01:09.391334 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-05 04:01:09.391341 | orchestrator | Thursday 05 February 2026 03:59:42 +0000 (0:00:02.158) 0:00:48.614 ***** 2026-02-05 04:01:09.391362 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:01:09.391369 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:01:09.391376 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:01:09.391383 | orchestrator | 2026-02-05 04:01:09.391389 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-05 04:01:09.391396 | orchestrator | Thursday 05 February 2026 03:59:44 +0000 (0:00:02.152) 0:00:50.767 ***** 2026-02-05 04:01:09.391403 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.391410 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391416 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.391423 | orchestrator | 2026-02-05 
04:01:09.391430 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-05 04:01:09.391436 | orchestrator | Thursday 05 February 2026 03:59:46 +0000 (0:00:01.913) 0:00:52.680 ***** 2026-02-05 04:01:09.391443 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:01:09.391450 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391457 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391463 | orchestrator | 2026-02-05 04:01:09.391470 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-05 04:01:09.391477 | orchestrator | Thursday 05 February 2026 03:59:47 +0000 (0:00:01.343) 0:00:54.024 ***** 2026-02-05 04:01:09.391484 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.391491 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.391498 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391504 | orchestrator | 2026-02-05 04:01:09.391511 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-05 04:01:09.391518 | orchestrator | Thursday 05 February 2026 03:59:49 +0000 (0:00:01.714) 0:00:55.739 ***** 2026-02-05 04:01:09.391525 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.391531 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.391538 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391544 | orchestrator | 2026-02-05 04:01:09.391551 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-05 04:01:09.391558 | orchestrator | Thursday 05 February 2026 03:59:51 +0000 (0:00:02.240) 0:00:57.980 ***** 2026-02-05 04:01:09.391565 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:01:09.391571 | orchestrator | 2026-02-05 04:01:09.391578 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] 
******************************* 2026-02-05 04:01:09.391585 | orchestrator | Thursday 05 February 2026 03:59:53 +0000 (0:00:02.018) 0:00:59.999 ***** 2026-02-05 04:01:09.391643 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391662 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.391670 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.391678 | orchestrator | 2026-02-05 04:01:09.391686 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-05 04:01:09.391694 | orchestrator | Thursday 05 February 2026 03:59:56 +0000 (0:00:02.471) 0:01:02.470 ***** 2026-02-05 04:01:09.391702 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391709 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.391717 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391725 | orchestrator | 2026-02-05 04:01:09.391732 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-05 04:01:09.391740 | orchestrator | Thursday 05 February 2026 03:59:57 +0000 (0:00:01.651) 0:01:04.122 ***** 2026-02-05 04:01:09.391748 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391756 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391764 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:01:09.391772 | orchestrator | 2026-02-05 04:01:09.391780 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-05 04:01:09.391790 | orchestrator | Thursday 05 February 2026 03:59:59 +0000 (0:00:01.836) 0:01:05.959 ***** 2026-02-05 04:01:09.391802 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391813 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391831 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:01:09.391850 | orchestrator | 2026-02-05 04:01:09.391860 | orchestrator | TASK [k3s_server : Deploy metallb manifest] 
************************************ 2026-02-05 04:01:09.391871 | orchestrator | Thursday 05 February 2026 04:00:02 +0000 (0:00:02.543) 0:01:08.502 ***** 2026-02-05 04:01:09.391881 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:01:09.391892 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391919 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391932 | orchestrator | 2026-02-05 04:01:09.391944 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-05 04:01:09.391955 | orchestrator | Thursday 05 February 2026 04:00:03 +0000 (0:00:01.377) 0:01:09.880 ***** 2026-02-05 04:01:09.391967 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:01:09.391979 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:01:09.391987 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:01:09.391994 | orchestrator | 2026-02-05 04:01:09.392001 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-05 04:01:09.392007 | orchestrator | Thursday 05 February 2026 04:00:05 +0000 (0:00:01.618) 0:01:11.499 ***** 2026-02-05 04:01:09.392014 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:01:09.392021 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:01:09.392028 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:01:09.392034 | orchestrator | 2026-02-05 04:01:09.392041 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-05 04:01:09.392047 | orchestrator | Thursday 05 February 2026 04:00:07 +0000 (0:00:02.168) 0:01:13.667 ***** 2026-02-05 04:01:09.392054 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.392061 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.392067 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.392074 | orchestrator | 2026-02-05 04:01:09.392080 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes 
version] *** 2026-02-05 04:01:09.392087 | orchestrator | Thursday 05 February 2026 04:00:09 +0000 (0:00:01.902) 0:01:15.570 ***** 2026-02-05 04:01:09.392094 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:01:09.392100 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:01:09.392107 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:01:09.392113 | orchestrator | 2026-02-05 04:01:09.392120 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-05 04:01:09.392127 | orchestrator | Thursday 05 February 2026 04:00:10 +0000 (0:00:01.417) 0:01:16.988 ***** 2026-02-05 04:01:09.392134 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 04:01:09.392142 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 04:01:09.392149 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 04:01:09.392156 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 04:01:09.392162 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 04:01:09.392169 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 04:01:09.392176 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-05 04:01:09.392182 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 04:01:09.392189 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 04:01:09.392196 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 04:01:09.392208 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 04:01:09.392215 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 04:01:09.392221 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-05 04:01:09.392228 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-05 04:01:09.392235 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-05 04:01:09.392246 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:01:09.392257 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:01:09.392272 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:01:09.392286 | orchestrator |
2026-02-05 04:01:09.392303 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-05 04:01:09.392314 | orchestrator | Thursday 05 February 2026 04:01:05 +0000 (0:00:55.032)       0:02:12.020 *****
2026-02-05 04:01:09.392325 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:01:09.392334 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:01:09.392343 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:01:09.392353 | orchestrator |
2026-02-05 04:01:09.392363 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-05 04:01:09.392374 | orchestrator | Thursday 05 February 2026 04:01:07 +0000 (0:00:01.430)       0:02:13.450 *****
2026-02-05 04:01:09.392384 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:01:09.392394 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:01:09.392404 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:01:09.392414 | orchestrator |
2026-02-05 04:01:09.392433 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-05 04:02:31.901843 | orchestrator | Thursday 05 February 2026 04:01:09 +0000 (0:00:02.053)       0:02:15.504 *****
2026-02-05 04:02:31.901933 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.901944 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.901951 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.901958 | orchestrator |
2026-02-05 04:02:31.901966 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-05 04:02:31.901973 | orchestrator | Thursday 05 February 2026 04:01:11 +0000 (0:00:02.207)       0:02:17.711 *****
2026-02-05 04:02:31.901981 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:02:31.901989 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:02:31.901995 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:02:31.902002 | orchestrator |
2026-02-05 04:02:31.902009 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-05 04:02:31.902061 | orchestrator | Thursday 05 February 2026 04:01:54 +0000 (0:00:42.682)       0:03:00.394 *****
2026-02-05 04:02:31.902068 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902075 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902082 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902089 | orchestrator |
2026-02-05 04:02:31.902096 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-05 04:02:31.902102 | orchestrator | Thursday 05 February 2026 04:01:56 +0000 (0:00:01.889)       0:03:02.284 *****
2026-02-05 04:02:31.902109 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902129 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902136 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902143 | orchestrator |
2026-02-05 04:02:31.902149 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-05 04:02:31.902155 | orchestrator | Thursday 05 February 2026 04:01:57 +0000 (0:00:01.646)       0:03:03.931 *****
2026-02-05 04:02:31.902179 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:02:31.902194 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:02:31.902201 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:02:31.902207 | orchestrator |
2026-02-05 04:02:31.902214 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-05 04:02:31.902220 | orchestrator | Thursday 05 February 2026 04:01:59 +0000 (0:00:02.009)       0:03:05.941 *****
2026-02-05 04:02:31.902226 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902233 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902239 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902245 | orchestrator |
2026-02-05 04:02:31.902251 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-05 04:02:31.902258 | orchestrator | Thursday 05 February 2026 04:02:01 +0000 (0:00:01.669)       0:03:07.610 *****
2026-02-05 04:02:31.902264 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902270 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902276 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902282 | orchestrator |
2026-02-05 04:02:31.902289 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-05 04:02:31.902295 | orchestrator | Thursday 05 February 2026 04:02:02 +0000 (0:00:01.333)       0:03:08.944 *****
2026-02-05 04:02:31.902301 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:02:31.902307 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:02:31.902314 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:02:31.902320 | orchestrator |
2026-02-05 04:02:31.902326 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-05 04:02:31.902332 | orchestrator | Thursday 05 February 2026 04:02:04 +0000 (0:00:01.648)       0:03:10.593 *****
2026-02-05 04:02:31.902338 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902344 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902351 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902357 | orchestrator |
2026-02-05 04:02:31.902363 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-05 04:02:31.902369 | orchestrator | Thursday 05 February 2026 04:02:06 +0000 (0:00:02.011)       0:03:12.604 *****
2026-02-05 04:02:31.902375 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:02:31.902382 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:02:31.902389 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:02:31.902397 | orchestrator |
2026-02-05 04:02:31.902404 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-05 04:02:31.902411 | orchestrator | Thursday 05 February 2026 04:02:08 +0000 (0:00:01.834)       0:03:14.439 *****
2026-02-05 04:02:31.902419 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:02:31.902426 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:02:31.902433 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:02:31.902440 | orchestrator |
2026-02-05 04:02:31.902447 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-05 04:02:31.902455 | orchestrator | Thursday 05 February 2026 04:02:10 +0000 (0:00:01.912)       0:03:16.352 *****
2026-02-05 04:02:31.902463 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:02:31.902470 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:02:31.902477 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:02:31.902485 | orchestrator |
2026-02-05 04:02:31.902492 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-05 04:02:31.902500 | orchestrator | Thursday 05 February 2026 04:02:11 +0000 (0:00:01.340)       0:03:17.693 *****
2026-02-05 04:02:31.902507 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:02:31.902514 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:02:31.902522 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:02:31.902529 | orchestrator |
2026-02-05 04:02:31.902536 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-05 04:02:31.902543 | orchestrator | Thursday 05 February 2026 04:02:12 +0000 (0:00:01.366)       0:03:19.059 *****
2026-02-05 04:02:31.902551 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902573 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902586 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902593 | orchestrator |
2026-02-05 04:02:31.902601 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-05 04:02:31.902623 | orchestrator | Thursday 05 February 2026 04:02:14 +0000 (0:00:01.705)       0:03:20.765 *****
2026-02-05 04:02:31.902631 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:02:31.902639 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:02:31.902646 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:02:31.902653 | orchestrator |
2026-02-05 04:02:31.902661 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-05 04:02:31.902684 | orchestrator | Thursday 05 February 2026 04:02:16 +0000 (0:00:01.728)       0:03:22.493 *****
2026-02-05 04:02:31.902696 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-05 04:02:31.902707 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-05 04:02:31.902718 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-05 04:02:31.902728 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-05 04:02:31.902744 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-05 04:02:31.902757 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-05 04:02:31.902768 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-05 04:02:31.902778 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-05 04:02:31.902789 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-05 04:02:31.902799 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-05 04:02:31.902810 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-05 04:02:31.902820 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-05 04:02:31.902830 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-05 04:02:31.902839 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-05 04:02:31.902850 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-05 04:02:31.902860 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-05 04:02:31.902870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-05 04:02:31.902881 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-05 04:02:31.902888 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-05 04:02:31.902894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-05 04:02:31.902901 | orchestrator |
2026-02-05 04:02:31.902907 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-05 04:02:31.902913 | orchestrator |
2026-02-05 04:02:31.902919 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-05 04:02:31.902926 | orchestrator | Thursday 05 February 2026 04:02:20 +0000 (0:00:04.556)       0:03:27.050 *****
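The worker play starting here joins agents against the servers using the node-token read in the "Read node-token from master" task above and the API endpoint configured earlier. A hedged sketch of how such a join invocation is typically assembled (placeholder values, not the role's actual template):

```shell
# Illustrative only: a k3s agent joins by pointing --server at the API
# endpoint (here the kube-vip address seen earlier in this log) and
# --token at the contents of /var/lib/rancher/k3s/server/node-token on a
# server node. The token value below is a placeholder.
join_cmd() {
  server_url="$1"
  token="$2"
  printf 'k3s agent --server %s --token %s' "$server_url" "$token"
}

join_cmd "https://192.168.16.8:6443" "<node-token>"
```

In the playbook this is done declaratively (the "Configure the k3s service" / "Manage k3s service" tasks below), not by running the command by hand.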
2026-02-05 04:02:31.902932 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:02:31.902938 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:02:31.902944 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:02:31.902950 | orchestrator |
2026-02-05 04:02:31.902956 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-05 04:02:31.902969 | orchestrator | Thursday 05 February 2026 04:02:22 +0000 (0:00:01.482)       0:03:28.533 *****
2026-02-05 04:02:31.902975 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:02:31.902981 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:02:31.902987 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:02:31.902993 | orchestrator |
2026-02-05 04:02:31.902999 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-05 04:02:31.903005 | orchestrator | Thursday 05 February 2026 04:02:24 +0000 (0:00:01.801)       0:03:30.334 *****
2026-02-05 04:02:31.903011 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:02:31.903017 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:02:31.903023 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:02:31.903029 | orchestrator |
2026-02-05 04:02:31.903036 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-05 04:02:31.903049 | orchestrator | Thursday 05 February 2026 04:02:25 +0000 (0:00:01.658)       0:03:31.993 *****
2026-02-05 04:02:31.903055 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 04:02:31.903062 | orchestrator |
2026-02-05 04:02:31.903068 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-05 04:02:31.903074 | orchestrator | Thursday 05 February 2026 04:02:27 +0000 (0:00:01.720)       0:03:33.713 *****
2026-02-05 04:02:31.903080 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:02:31.903086 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:02:31.903093 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:02:31.903099 | orchestrator |
2026-02-05 04:02:31.903105 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-05 04:02:31.903111 | orchestrator | Thursday 05 February 2026 04:02:28 +0000 (0:00:01.356)       0:03:35.070 *****
2026-02-05 04:02:31.903117 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:02:31.903124 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:02:31.903130 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:02:31.903136 | orchestrator |
2026-02-05 04:02:31.903142 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-05 04:02:31.903148 | orchestrator | Thursday 05 February 2026 04:02:30 +0000 (0:00:01.583)       0:03:36.654 *****
2026-02-05 04:02:31.903154 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:02:31.903160 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:02:31.903167 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:02:31.903173 | orchestrator |
2026-02-05 04:02:31.903179 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-05 04:02:31.903191 | orchestrator | Thursday 05 February 2026 04:02:31 +0000 (0:00:01.355)       0:03:38.010 *****
2026-02-05 04:03:43.343704 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:03:43.343784 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:03:43.343791 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:03:43.343796 | orchestrator |
2026-02-05 04:03:43.343802 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-05 04:03:43.343807 | orchestrator | Thursday 05 February 2026 04:02:33 +0000 (0:00:01.682)       0:03:39.692 *****
2026-02-05 04:03:43.343811 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:03:43.343815 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:03:43.343820 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:03:43.343823 | orchestrator |
2026-02-05 04:03:43.343827 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-05 04:03:43.343832 | orchestrator | Thursday 05 February 2026 04:02:35 +0000 (0:00:02.214)       0:03:41.907 *****
2026-02-05 04:03:43.343836 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:03:43.343840 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:03:43.343843 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:03:43.343847 | orchestrator |
2026-02-05 04:03:43.343851 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-05 04:03:43.343855 | orchestrator | Thursday 05 February 2026 04:02:38 +0000 (0:00:02.323)       0:03:44.231 *****
2026-02-05 04:03:43.343859 | orchestrator | changed: [testbed-node-3]
2026-02-05 04:03:43.343890 | orchestrator | changed: [testbed-node-4]
2026-02-05 04:03:43.343895 | orchestrator | changed: [testbed-node-5]
2026-02-05 04:03:43.343899 | orchestrator |
2026-02-05 04:03:43.343903 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-05 04:03:43.343906 | orchestrator |
2026-02-05 04:03:43.343910 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-05 04:03:43.343914 | orchestrator | Thursday 05 February 2026 04:02:46 +0000 (0:00:08.107)       0:03:52.338 *****
2026-02-05 04:03:43.343918 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.343921 | orchestrator |
2026-02-05 04:03:43.343925 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-05 04:03:43.343929 | orchestrator | Thursday 05 February 2026 04:02:48 +0000 (0:00:02.126)       0:03:54.464 *****
2026-02-05 04:03:43.343933 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.343937 | orchestrator |
2026-02-05 04:03:43.343940 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-05 04:03:43.343944 | orchestrator | Thursday 05 February 2026 04:02:49 +0000 (0:00:01.456)       0:03:55.921 *****
2026-02-05 04:03:43.343948 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-05 04:03:43.343952 | orchestrator |
2026-02-05 04:03:43.343956 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-05 04:03:43.343960 | orchestrator | Thursday 05 February 2026 04:02:51 +0000 (0:00:01.637)       0:03:57.558 *****
2026-02-05 04:03:43.343963 | orchestrator | changed: [testbed-manager]
2026-02-05 04:03:43.343967 | orchestrator |
2026-02-05 04:03:43.343971 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-05 04:03:43.343975 | orchestrator | Thursday 05 February 2026 04:02:53 +0000 (0:00:01.893)       0:03:59.452 *****
2026-02-05 04:03:43.343978 | orchestrator | changed: [testbed-manager]
2026-02-05 04:03:43.343982 | orchestrator |
2026-02-05 04:03:43.343986 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-05 04:03:43.343990 | orchestrator | Thursday 05 February 2026 04:02:54 +0000 (0:00:01.637)       0:04:01.089 *****
2026-02-05 04:03:43.343994 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 04:03:43.343998 | orchestrator |
2026-02-05 04:03:43.344002 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-05 04:03:43.344005 | orchestrator | Thursday 05 February 2026 04:02:58 +0000 (0:00:03.042)       0:04:04.132 *****
2026-02-05 04:03:43.344009 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 04:03:43.344013 | orchestrator |
2026-02-05 04:03:43.344017 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-05 04:03:43.344021 | orchestrator | Thursday 05 February 2026 04:02:59 +0000 (0:00:01.898)       0:04:06.031 *****
2026-02-05 04:03:43.344024 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344028 | orchestrator |
2026-02-05 04:03:43.344032 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-05 04:03:43.344036 | orchestrator | Thursday 05 February 2026 04:03:01 +0000 (0:00:01.453)       0:04:07.485 *****
2026-02-05 04:03:43.344039 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344043 | orchestrator |
2026-02-05 04:03:43.344047 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-05 04:03:43.344051 | orchestrator |
2026-02-05 04:03:43.344054 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-05 04:03:43.344058 | orchestrator | Thursday 05 February 2026 04:03:02 +0000 (0:00:01.577)       0:04:09.062 *****
2026-02-05 04:03:43.344062 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344066 | orchestrator |
2026-02-05 04:03:43.344069 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-05 04:03:43.344073 | orchestrator | Thursday 05 February 2026 04:03:04 +0000 (0:00:01.201)       0:04:10.263 *****
2026-02-05 04:03:43.344077 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 04:03:43.344081 | orchestrator |
2026-02-05 04:03:43.344085 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-05 04:03:43.344093 | orchestrator | Thursday 05 February 2026 04:03:05 +0000 (0:00:01.511)       0:04:11.775 *****
2026-02-05 04:03:43.344097 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344100 | orchestrator |
2026-02-05 04:03:43.344104 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-05 04:03:43.344108 | orchestrator | Thursday 05 February 2026 04:03:07 +0000 (0:00:01.961)       0:04:13.737 *****
2026-02-05 04:03:43.344112 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344115 | orchestrator |
2026-02-05 04:03:43.344120 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-05 04:03:43.344123 | orchestrator | Thursday 05 February 2026 04:03:10 +0000 (0:00:02.623)       0:04:16.361 *****
2026-02-05 04:03:43.344127 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344131 | orchestrator |
2026-02-05 04:03:43.344135 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-05 04:03:43.344148 | orchestrator | Thursday 05 February 2026 04:03:11 +0000 (0:00:01.396)       0:04:17.757 *****
2026-02-05 04:03:43.344152 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344156 | orchestrator |
2026-02-05 04:03:43.344159 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-05 04:03:43.344163 | orchestrator | Thursday 05 February 2026 04:03:13 +0000 (0:00:01.458)       0:04:19.216 *****
2026-02-05 04:03:43.344167 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344171 | orchestrator |
2026-02-05 04:03:43.344175 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-05 04:03:43.344179 | orchestrator | Thursday 05 February 2026 04:03:14 +0000 (0:00:01.642)       0:04:20.858 *****
2026-02-05 04:03:43.344182 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344186 | orchestrator |
2026-02-05 04:03:43.344190 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-05 04:03:43.344194 | orchestrator | Thursday 05 February 2026 04:03:17 +0000 (0:00:02.478)       0:04:23.337 *****
2026-02-05 04:03:43.344198 | orchestrator | ok: [testbed-manager]
2026-02-05 04:03:43.344201 | orchestrator |
2026-02-05 04:03:43.344205 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-05 04:03:43.344209 | orchestrator |
2026-02-05 04:03:43.344213 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-05 04:03:43.344217 | orchestrator | Thursday 05 February 2026 04:03:18 +0000 (0:00:01.712)       0:04:25.050 *****
2026-02-05 04:03:43.344221 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:03:43.344225 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:03:43.344229 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:03:43.344232 | orchestrator |
2026-02-05 04:03:43.344236 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-05 04:03:43.344241 | orchestrator | Thursday 05 February 2026 04:03:20 +0000 (0:00:01.388)       0:04:26.438 *****
2026-02-05 04:03:43.344246 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:03:43.344251 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:03:43.344255 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:03:43.344259 | orchestrator |
2026-02-05 04:03:43.344264 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-05 04:03:43.344268 | orchestrator | Thursday 05 February 2026 04:03:21 +0000 (0:00:01.669)       0:04:28.107 *****
2026-02-05 04:03:43.344273 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:03:43.344278 | orchestrator |
2026-02-05 04:03:43.344283 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-05 04:03:43.344288 | orchestrator | Thursday 05 February 2026 04:03:23 +0000 (0:00:01.828)       0:04:29.936 *****
2026-02-05 04:03:43.344292 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 04:03:43.344297 | orchestrator |
2026-02-05 04:03:43.344301 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-05 04:03:43.344305 | orchestrator | Thursday 05 February 2026 04:03:25 +0000 (0:00:01.931)       0:04:31.867 *****
2026-02-05 04:03:43.344310 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 04:03:43.344317 | orchestrator |
2026-02-05 04:03:43.344321 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-05 04:03:43.344325 | orchestrator | Thursday 05 February 2026 04:03:27 +0000 (0:00:01.866)       0:04:33.733 *****
2026-02-05 04:03:43.344330 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:03:43.344334 | orchestrator |
2026-02-05 04:03:43.344339 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-05 04:03:43.344343 | orchestrator | Thursday 05 February 2026 04:03:28 +0000 (0:00:01.167)       0:04:34.901 *****
2026-02-05 04:03:43.344348 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 04:03:43.344352 | orchestrator |
2026-02-05 04:03:43.344357 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-05 04:03:43.344361 | orchestrator | Thursday 05 February 2026 04:03:30 +0000 (0:00:02.043)       0:04:36.944 *****
2026-02-05 04:03:43.344366 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 04:03:43.344370 | orchestrator |
2026-02-05 04:03:43.344375 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-05 04:03:43.344379 | orchestrator | Thursday 05 February 2026 04:03:33 +0000 (0:00:02.272)       0:04:39.217 *****
2026-02-05 04:03:43.344383 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 04:03:43.344388 | orchestrator |
2026-02-05 04:03:43.344392 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-05 04:03:43.344397 | orchestrator | Thursday 05 February 2026 04:03:34 +0000 (0:00:01.125)       0:04:40.342 *****
2026-02-05 04:03:43.344401 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 04:03:43.344405 | orchestrator |
2026-02-05 04:03:43.344410 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-05 04:03:43.344414 | orchestrator | Thursday 05 February 2026 04:03:35 +0000 (0:00:01.135)       0:04:41.478 *****
2026-02-05 04:03:43.344419 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-05 04:03:43.344423 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-05 04:03:43.344428 | orchestrator | }
2026-02-05 04:03:43.344433 | orchestrator |
2026-02-05 04:03:43.344441 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-05 04:03:43.344446 | orchestrator | Thursday 05 February 2026 04:03:36 +0000 (0:00:01.139)       0:04:42.618 *****
2026-02-05 04:03:43.344451 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:03:43.344455 | orchestrator |
2026-02-05 04:03:43.344460 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-05 04:03:43.344464 | orchestrator | Thursday 05 February 2026 04:03:37 +0000 (0:00:01.128)       0:04:43.747 *****
2026-02-05 04:03:43.344469 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-05 04:03:43.344474 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-05 04:03:43.344478 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-05 04:03:43.344483 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-05 04:03:43.344487 | orchestrator |
2026-02-05 04:03:43.344492 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-05 04:03:43.344499 | orchestrator | Thursday 05 February 2026 04:03:43 +0000 (0:00:05.704)       0:04:49.451 *****
2026-02-05 04:04:23.842278 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 04:04:23.842391 | orchestrator |
2026-02-05 04:04:23.842407 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-05 04:04:23.842419 | orchestrator | Thursday 05 February 2026 04:03:45 +0000 (0:00:02.509)       0:04:51.961 *****
2026-02-05 04:04:23.842431 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 04:04:23.842441 | orchestrator |
2026-02-05 04:04:23.842451 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-05 04:04:23.842461 | orchestrator | Thursday 05 February 2026 04:03:48 +0000 (0:00:02.482)       0:04:54.444 *****
2026-02-05 04:04:23.842471 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 04:04:23.842509 | orchestrator |
2026-02-05 04:04:23.842527 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-05 04:04:23.842543 | orchestrator | Thursday 05 February 2026 04:03:52 +0000 (0:00:04.289)       0:04:58.734 *****
2026-02-05 04:04:23.842560 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:04:23.842574 | orchestrator |
2026-02-05 04:04:23.842591 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-05 04:04:23.842626 | orchestrator | Thursday 05 February 2026 04:03:53 +0000 (0:00:01.185)       0:04:59.919 *****
2026-02-05 04:04:23.842643 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-05 04:04:23.842662 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-05 04:04:23.842678 | orchestrator |
2026-02-05 04:04:23.842724 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-05 04:04:23.842742 | orchestrator | Thursday 05 February 2026 04:03:56 +0000 (0:00:02.895)       0:05:02.814 *****
2026-02-05 04:04:23.842753 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:04:23.842763 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:04:23.842773 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:04:23.842783 | orchestrator |
2026-02-05 04:04:23.842793 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-05 04:04:23.842805 | orchestrator | Thursday 05 February 2026 04:03:58 +0000 (0:00:01.362)       0:05:04.176 *****
2026-02-05 04:04:23.842817 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:04:23.842829 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:04:23.842840 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:04:23.842852 | orchestrator |
2026-02-05 04:04:23.842863 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-05 04:04:23.842874 | orchestrator |
2026-02-05 04:04:23.842885 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-05 04:04:23.842898 | orchestrator | Thursday 05 February 2026 04:04:00 +0000 (0:00:02.071)       0:05:06.248 *****
2026-02-05 04:04:23.842909 | orchestrator | ok: [testbed-manager]
2026-02-05 04:04:23.842920 | orchestrator |
2026-02-05 04:04:23.842932 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-05 04:04:23.842943 | orchestrator | Thursday 05 February 2026 04:04:01 +0000 (0:00:01.138)       0:05:07.386 *****
2026-02-05 04:04:23.842954 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 04:04:23.842965 | orchestrator |
2026-02-05 04:04:23.842976 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-05 04:04:23.842987 | orchestrator | Thursday 05 February 2026 04:04:02 +0000 (0:00:01.470)       0:05:08.856 *****
2026-02-05 04:04:23.842998 | orchestrator | ok: [testbed-manager]
2026-02-05 04:04:23.843010 | orchestrator |
2026-02-05 04:04:23.843022 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-05 04:04:23.843034 | orchestrator |
2026-02-05 04:04:23.843045 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-05 04:04:23.843056 | orchestrator | Thursday 05 February 2026 04:04:07 +0000 (0:00:05.076)       0:05:13.933 *****
2026-02-05 04:04:23.843068 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:04:23.843079 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:04:23.843091 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:04:23.843102 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:04:23.843113 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:04:23.843125 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:04:23.843136 | orchestrator |
2026-02-05 04:04:23.843148 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-05 04:04:23.843160 | orchestrator | Thursday 05 February 2026 04:04:09 +0000 (0:00:01.950)       0:05:15.884 *****
2026-02-05 04:04:23.843172 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-05 04:04:23.843182 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-05 04:04:23.843203 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-05 04:04:23.843213 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-05 04:04:23.843223 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-05 04:04:23.843232 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-05 04:04:23.843241 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-05 04:04:23.843251 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-05 04:04:23.843260 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 04:04:23.843270 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-05 04:04:23.843279 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 04:04:23.843289 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 04:04:23.843316 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 04:04:23.843326 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 04:04:23.843336 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-05 04:04:23.843346 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-05 04:04:23.843356 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 04:04:23.843365 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 04:04:23.843375 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-05 04:04:23.843384 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 04:04:23.843394 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 04:04:23.843404 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-05 04:04:23.843414 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 
04:04:23.843424 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 04:04:23.843434 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-05 04:04:23.843444 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 04:04:23.843454 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 04:04:23.843464 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-05 04:04:23.843474 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 04:04:23.843483 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-05 04:04:23.843493 | orchestrator | 2026-02-05 04:04:23.843503 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-05 04:04:23.843512 | orchestrator | Thursday 05 February 2026 04:04:19 +0000 (0:00:09.370) 0:05:25.254 ***** 2026-02-05 04:04:23.843522 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:04:23.843532 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:04:23.843541 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:04:23.843551 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:04:23.843561 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:04:23.843571 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:04:23.843593 | orchestrator | 2026-02-05 04:04:23.843603 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-05 04:04:23.843613 | orchestrator | Thursday 05 February 2026 04:04:21 +0000 (0:00:01.998) 0:05:27.252 ***** 2026-02-05 04:04:23.843623 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:04:23.843633 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 04:04:23.843643 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:04:23.843652 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:04:23.843662 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:04:23.843671 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:04:23.843681 | orchestrator | 2026-02-05 04:04:23.843712 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:04:23.843730 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 04:04:23.843750 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 04:04:23.843768 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 04:04:23.843785 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 04:04:23.843795 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 04:04:23.843805 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 04:04:23.843815 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-05 04:04:23.843829 | orchestrator | 2026-02-05 04:04:23.843845 | orchestrator | 2026-02-05 04:04:23.843861 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:04:23.843877 | orchestrator | Thursday 05 February 2026 04:04:23 +0000 (0:00:02.691) 0:05:29.943 ***** 2026-02-05 04:04:23.843893 | orchestrator | =============================================================================== 2026-02-05 04:04:23.843903 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.03s 2026-02-05 
04:04:23.843913 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 42.68s 2026-02-05 04:04:23.843923 | orchestrator | Manage labels ----------------------------------------------------------- 9.37s 2026-02-05 04:04:23.843941 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.11s 2026-02-05 04:04:24.306447 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.70s 2026-02-05 04:04:24.306596 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.08s 2026-02-05 04:04:24.306621 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.56s 2026-02-05 04:04:24.306641 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.29s 2026-02-05 04:04:24.306660 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 3.63s 2026-02-05 04:04:24.306756 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.31s 2026-02-05 04:04:24.307612 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 3.09s 2026-02-05 04:04:24.307654 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.04s 2026-02-05 04:04:24.307742 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.91s 2026-02-05 04:04:24.307756 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.90s 2026-02-05 04:04:24.307795 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.70s 2026-02-05 04:04:24.307811 | orchestrator | Manage taints ----------------------------------------------------------- 2.69s 2026-02-05 04:04:24.307834 | orchestrator | kubectl : Install 
apt-transport-https package --------------------------- 2.62s 2026-02-05 04:04:24.307851 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.54s 2026-02-05 04:04:24.307866 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.51s 2026-02-05 04:04:24.307882 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.48s 2026-02-05 04:04:24.612199 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-05 04:04:24.612293 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-05 04:04:24.620334 | orchestrator | + set -e 2026-02-05 04:04:24.620825 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 04:04:24.620853 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 04:04:24.620865 | orchestrator | ++ INTERACTIVE=false 2026-02-05 04:04:24.620873 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 04:04:24.620881 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 04:04:24.620889 | orchestrator | + osism apply openstackclient 2026-02-05 04:04:36.762914 | orchestrator | 2026-02-05 04:04:36 | INFO  | Task cc158e78-8833-46a5-89d7-8fa60595a21a (openstackclient) was prepared for execution. 2026-02-05 04:04:36.763019 | orchestrator | 2026-02-05 04:04:36 | INFO  | It takes a moment until task cc158e78-8833-46a5-89d7-8fa60595a21a (openstackclient) has been started and output is visible here. 
2026-02-05 04:05:04.900418 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-05 04:05:04.900527 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-05 04:05:04.900553 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-05 04:05:04.900557 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-05 04:05:04.900566 | orchestrator | 2026-02-05 04:05:04.900571 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-05 04:05:04.900575 | orchestrator | 2026-02-05 04:05:04.900580 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-05 04:05:04.900584 | orchestrator | Thursday 05 February 2026 04:04:43 +0000 (0:00:01.740) 0:00:01.740 ***** 2026-02-05 04:05:04.900589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-05 04:05:04.900594 | orchestrator | 2026-02-05 04:05:04.900598 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-05 04:05:04.900602 | orchestrator | Thursday 05 February 2026 04:04:44 +0000 (0:00:00.880) 0:00:02.620 ***** 2026-02-05 04:05:04.900606 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-05 04:05:04.900611 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-05 04:05:04.900615 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-05 04:05:04.900619 | orchestrator | 2026-02-05 04:05:04.900623 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-05 04:05:04.900646 | orchestrator | Thursday 05 February 2026 04:04:45 +0000 (0:00:01.489) 0:00:04.110 ***** 2026-02-05 04:05:04.900651 | 
orchestrator | changed: [testbed-manager] 2026-02-05 04:05:04.900655 | orchestrator | 2026-02-05 04:05:04.900659 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-05 04:05:04.900663 | orchestrator | Thursday 05 February 2026 04:04:46 +0000 (0:00:01.274) 0:00:05.384 ***** 2026-02-05 04:05:04.900667 | orchestrator | ok: [testbed-manager] 2026-02-05 04:05:04.900675 | orchestrator | 2026-02-05 04:05:04.900681 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-05 04:05:04.900711 | orchestrator | Thursday 05 February 2026 04:04:47 +0000 (0:00:01.118) 0:00:06.503 ***** 2026-02-05 04:05:04.900780 | orchestrator | ok: [testbed-manager] 2026-02-05 04:05:04.900787 | orchestrator | 2026-02-05 04:05:04.900793 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-05 04:05:04.900799 | orchestrator | Thursday 05 February 2026 04:04:48 +0000 (0:00:00.976) 0:00:07.480 ***** 2026-02-05 04:05:04.900805 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-05 04:05:04.900811 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-05 04:05:04.900823 | orchestrator | ok: [testbed-manager] 2026-02-05 04:05:04.900828 | orchestrator | 2026-02-05 04:05:04.900832 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-05 04:05:04.900836 | orchestrator | Thursday 05 February 2026 04:04:49 +0000 (0:00:00.707) 0:00:08.187 ***** 2026-02-05 04:05:04.900840 | orchestrator | changed: [testbed-manager] 2026-02-05 04:05:04.900844 | orchestrator | 2026-02-05 04:05:04.900848 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-05 04:05:04.900851 | orchestrator | Thursday 05 February 2026 04:05:01 +0000 (0:00:11.798) 0:00:19.986 ***** 2026-02-05 04:05:04.900855 
| orchestrator | changed: [testbed-manager] 2026-02-05 04:05:04.900859 | orchestrator | 2026-02-05 04:05:04.900863 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-05 04:05:04.900877 | orchestrator | Thursday 05 February 2026 04:05:02 +0000 (0:00:01.360) 0:00:21.347 ***** 2026-02-05 04:05:04.900881 | orchestrator | changed: [testbed-manager] 2026-02-05 04:05:04.900885 | orchestrator | 2026-02-05 04:05:04.900889 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-05 04:05:04.900892 | orchestrator | Thursday 05 February 2026 04:05:03 +0000 (0:00:00.616) 0:00:21.963 ***** 2026-02-05 04:05:04.900896 | orchestrator | ok: [testbed-manager] 2026-02-05 04:05:04.900900 | orchestrator | 2026-02-05 04:05:04.900904 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:05:04.900908 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 04:05:04.900913 | orchestrator | 2026-02-05 04:05:04.900917 | orchestrator | 2026-02-05 04:05:04.900920 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:05:04.900924 | orchestrator | Thursday 05 February 2026 04:05:04 +0000 (0:00:01.118) 0:00:23.081 ***** 2026-02-05 04:05:04.900928 | orchestrator | =============================================================================== 2026-02-05 04:05:04.900932 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.80s 2026-02-05 04:05:04.900936 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.49s 2026-02-05 04:05:04.900939 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.36s 2026-02-05 04:05:04.900943 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file 
----------- 1.27s 2026-02-05 04:05:04.900947 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.12s 2026-02-05 04:05:04.900951 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.12s 2026-02-05 04:05:04.900969 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.98s 2026-02-05 04:05:04.900974 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.88s 2026-02-05 04:05:04.900979 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.71s 2026-02-05 04:05:04.900984 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s 2026-02-05 04:05:05.267867 | orchestrator | + osism apply -a upgrade common 2026-02-05 04:05:07.464775 | orchestrator | 2026-02-05 04:05:07 | INFO  | Task 7414ec0e-e7d6-4176-b5da-4d2ac45a4e51 (common) was prepared for execution. 2026-02-05 04:05:07.464872 | orchestrator | 2026-02-05 04:05:07 | INFO  | It takes a moment until task 7414ec0e-e7d6-4176-b5da-4d2ac45a4e51 (common) has been started and output is visible here. 
2026-02-05 04:05:28.761648 | orchestrator | 2026-02-05 04:05:28.761756 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-05 04:05:28.761766 | orchestrator | 2026-02-05 04:05:28.761772 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-05 04:05:28.761778 | orchestrator | Thursday 05 February 2026 04:05:14 +0000 (0:00:02.363) 0:00:02.363 ***** 2026-02-05 04:05:28.761785 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 04:05:28.761792 | orchestrator | 2026-02-05 04:05:28.761797 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-05 04:05:28.761803 | orchestrator | Thursday 05 February 2026 04:05:18 +0000 (0:00:03.794) 0:00:06.158 ***** 2026-02-05 04:05:28.761808 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761829 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761836 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761845 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761854 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761861 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761869 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761877 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761886 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 
04:05:28.761895 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761903 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761911 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 04:05:28.761919 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 04:05:28.761928 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 04:05:28.761937 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 04:05:28.761945 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761954 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 04:05:28.761964 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761973 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 04:05:28.761982 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 04:05:28.761989 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 04:05:28.761994 | orchestrator | 2026-02-05 04:05:28.762000 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-05 04:05:28.762005 | orchestrator | Thursday 05 February 2026 04:05:23 +0000 (0:00:05.149) 0:00:11.307 ***** 2026-02-05 04:05:28.762011 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 04:05:28.762056 | orchestrator | 2026-02-05 
04:05:28.762062 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-05 04:05:28.762085 | orchestrator | Thursday 05 February 2026 04:05:26 +0000 (0:00:02.923) 0:00:14.230 ***** 2026-02-05 04:05:28.762094 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:28.762113 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:28.762139 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:28.762258 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:28.762266 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:28.762273 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:28.762280 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:28.762292 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:28.762298 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:28.762309 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684286 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684390 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684398 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684405 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684433 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:33.684443 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:33.684451 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684480 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684488 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684495 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684502 | orchestrator |
2026-02-05 04:05:33.684511 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-05 04:05:33.684519 | orchestrator | Thursday 05 February 2026 04:05:32 +0000 (0:00:06.552) 0:00:20.783 *****
2026-02-05 04:05:33.684528 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:33.684541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:33.684549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:33.684567 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.119692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:36.119892 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:05:36.119927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.119949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:36.120001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120022 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:05:36.120041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120060 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:05:36.120080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:36.120181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:36.120198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120212 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:05:36.120237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120265 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:05:36.120279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:36.120307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:36.120335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.566657 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:05:39.566833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.566878 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:05:39.566892 | orchestrator |
2026-02-05 04:05:39.566905 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-05 04:05:39.566919 | orchestrator | Thursday 05 February 2026 04:05:36 +0000 (0:00:03.389) 0:00:24.173 *****
2026-02-05 04:05:39.566933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:39.566948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:39.566961 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.566975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.566988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:39.567000 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.567045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.567073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.567087 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:05:39.567099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:39.567122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.567134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:39.567146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.567158 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:05:39.567170 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:05:39.567188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:39.567217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:51.804221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:51.804349 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:05:51.804396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:51.804420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:51.804440 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:05:51.804453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:51.804465 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:05:51.804476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:51.804488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:51.804541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:51.804554 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:05:51.804566 | orchestrator |
2026-02-05 04:05:51.804578 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-05 04:05:51.804591 | orchestrator | Thursday 05 February 2026 04:05:39 +0000 (0:00:03.463) 0:00:27.636 *****
2026-02-05 04:05:51.804602 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:05:51.804613 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:05:51.804624 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:05:51.804635 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:05:51.804665 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:05:51.804676 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:05:51.804687 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:05:51.804698 | orchestrator |
2026-02-05 04:05:51.804710 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-05 04:05:51.804721 | orchestrator | Thursday 05 February 2026 04:05:41 +0000 (0:00:02.233) 0:00:29.870 *****
2026-02-05 04:05:51.804787 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:05:51.804802 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:05:51.804816 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:05:51.804829 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:05:51.804843 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:05:51.804855 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:05:51.804868 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:05:51.804881 | orchestrator |
2026-02-05 04:05:51.804894 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-05 04:05:51.804907 | orchestrator | Thursday 05 February 2026 04:05:43 +0000 (0:00:02.096) 0:00:31.966 *****
2026-02-05 04:05:51.804920 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:05:51.804932 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:05:51.804945 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:05:51.804958 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:05:51.804970 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:05:51.804983 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:05:51.804996 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:05:51.805009 | orchestrator |
2026-02-05 04:05:51.805022 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-05 04:05:51.805035 | orchestrator | Thursday 05 February 2026 04:05:45 +0000 (0:00:02.036) 0:00:34.003 *****
2026-02-05 04:05:51.805049 | orchestrator | changed: [testbed-manager]
2026-02-05 04:05:51.805062 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:05:51.805074 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:05:51.805087 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:05:51.805100 | orchestrator | changed: [testbed-node-3]
2026-02-05 04:05:51.805112 | orchestrator | changed: [testbed-node-4]
2026-02-05 04:05:51.805125 | orchestrator | changed: [testbed-node-5]
2026-02-05 04:05:51.805138 | orchestrator |
2026-02-05 04:05:51.805150 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-05 04:05:51.805161 | orchestrator | Thursday 05 February 2026 04:05:49 +0000 (0:00:03.289) 0:00:37.292 *****
2026-02-05 04:05:51.805173 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:51.805196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:51.805208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:51.805225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:51.805247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 04:05:56.365476 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:56.365586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:56.365603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:56.365641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:56.365653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:05:56.365666 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:56.365840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:05:56.365853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:05:56.365902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:19.261409 | orchestrator | 2026-02-05 04:06:19.261520 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-05 04:06:19.261536 | orchestrator | Thursday 05 February 2026 04:05:56 +0000 (0:00:07.129) 0:00:44.422 ***** 2026-02-05 
04:06:19.261547 | orchestrator | [WARNING]: Skipped 2026-02-05 04:06:19.261559 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-05 04:06:19.261570 | orchestrator | to this access issue: 2026-02-05 04:06:19.261580 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-05 04:06:19.261590 | orchestrator | directory 2026-02-05 04:06:19.261600 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 04:06:19.261611 | orchestrator | 2026-02-05 04:06:19.261622 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-05 04:06:19.261656 | orchestrator | Thursday 05 February 2026 04:05:58 +0000 (0:00:02.469) 0:00:46.892 ***** 2026-02-05 04:06:19.261667 | orchestrator | [WARNING]: Skipped 2026-02-05 04:06:19.261676 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-05 04:06:19.261686 | orchestrator | to this access issue: 2026-02-05 04:06:19.261696 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-05 04:06:19.261705 | orchestrator | directory 2026-02-05 04:06:19.261715 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 04:06:19.261725 | orchestrator | 2026-02-05 04:06:19.261734 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-05 04:06:19.261773 | orchestrator | Thursday 05 February 2026 04:06:00 +0000 (0:00:01.886) 0:00:48.778 ***** 2026-02-05 04:06:19.261784 | orchestrator | [WARNING]: Skipped 2026-02-05 04:06:19.261794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-05 04:06:19.261803 | orchestrator | to this access issue: 2026-02-05 04:06:19.261814 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-05 04:06:19.261823 | orchestrator | 
directory 2026-02-05 04:06:19.261833 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 04:06:19.261843 | orchestrator | 2026-02-05 04:06:19.261852 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-05 04:06:19.261862 | orchestrator | Thursday 05 February 2026 04:06:02 +0000 (0:00:01.856) 0:00:50.635 ***** 2026-02-05 04:06:19.261872 | orchestrator | [WARNING]: Skipped 2026-02-05 04:06:19.261882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-05 04:06:19.261891 | orchestrator | to this access issue: 2026-02-05 04:06:19.261901 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-05 04:06:19.261911 | orchestrator | directory 2026-02-05 04:06:19.261920 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 04:06:19.261930 | orchestrator | 2026-02-05 04:06:19.261940 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-05 04:06:19.261952 | orchestrator | Thursday 05 February 2026 04:06:04 +0000 (0:00:01.884) 0:00:52.519 ***** 2026-02-05 04:06:19.261963 | orchestrator | changed: [testbed-manager] 2026-02-05 04:06:19.261975 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:06:19.261986 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:06:19.261996 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:06:19.262007 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:06:19.262076 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:06:19.262089 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:06:19.262100 | orchestrator | 2026-02-05 04:06:19.262111 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-05 04:06:19.262123 | orchestrator | Thursday 05 February 2026 04:06:09 +0000 (0:00:04.723) 0:00:57.243 ***** 2026-02-05 04:06:19.262134 | orchestrator | ok: 
[testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262146 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262158 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262169 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262195 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262205 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262215 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-05 04:06:19.262224 | orchestrator | 2026-02-05 04:06:19.262234 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-05 04:06:19.262252 | orchestrator | Thursday 05 February 2026 04:06:12 +0000 (0:00:03.588) 0:01:00.831 ***** 2026-02-05 04:06:19.262262 | orchestrator | ok: [testbed-manager] 2026-02-05 04:06:19.262272 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:06:19.262282 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:06:19.262292 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:06:19.262301 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:06:19.262311 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:06:19.262320 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:06:19.262330 | orchestrator | 2026-02-05 04:06:19.262339 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-05 04:06:19.262349 | orchestrator | Thursday 05 February 2026 04:06:16 +0000 (0:00:03.426) 0:01:04.258 ***** 2026-02-05 04:06:19.262379 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:19.262394 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:19.262404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:19.262415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:19.262425 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:19.262442 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:19.262458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:19.262469 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:19.262488 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:29.015626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:29.015736 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:29.015804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:29.015818 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:29.015855 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:29.015867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:29.015877 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:29.015905 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:29.015916 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:29.015943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:29.015954 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:29.015973 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:29.015984 | orchestrator | 2026-02-05 04:06:29.015997 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-05 04:06:29.016008 | orchestrator | Thursday 05 February 2026 04:06:19 +0000 (0:00:03.070) 0:01:07.329 ***** 2026-02-05 04:06:29.016023 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016035 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016045 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016055 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016064 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016073 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016086 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 04:06:29.016103 | orchestrator | 2026-02-05 04:06:29.016113 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-05 04:06:29.016123 | orchestrator | Thursday 05 February 2026 04:06:22 +0000 (0:00:03.565) 0:01:10.894 ***** 2026-02-05 04:06:29.016132 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016142 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016154 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016166 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016177 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016189 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016201 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 04:06:29.016212 | orchestrator | 2026-02-05 04:06:29.016224 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-05 04:06:29.016236 | orchestrator | Thursday 05 February 2026 04:06:26 +0000 (0:00:03.820) 0:01:14.714 ***** 2026-02-05 04:06:29.016258 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:32.576450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:32.576537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:32.576570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:32.576590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:32.576597 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576681 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576709 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:32.576730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:32.576769 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 04:06:35.366224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:35.366391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:35.366476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:35.366486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:06:35.366493 | orchestrator | 2026-02-05 04:06:35.366499 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-05 04:06:35.366506 | orchestrator | Thursday 05 February 2026 04:06:32 +0000 (0:00:05.932) 0:01:20.647 ***** 2026-02-05 04:06:35.366513 | orchestrator | changed: [testbed-manager] => { 2026-02-05 04:06:35.366519 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:06:35.366552 | orchestrator | } 2026-02-05 04:06:35.366557 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:06:35.366563 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:06:35.366568 | orchestrator | } 2026-02-05 04:06:35.366573 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:06:35.366578 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:06:35.366583 | orchestrator | } 2026-02-05 04:06:35.366589 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:06:35.366594 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:06:35.366599 | orchestrator | } 2026-02-05 04:06:35.366604 | orchestrator | changed: [testbed-node-3] => { 2026-02-05 04:06:35.366609 | orchestrator |  "msg": "Notifying handlers" 
2026-02-05 04:06:35.366614 | orchestrator | } 2026-02-05 04:06:35.366619 | orchestrator | changed: [testbed-node-4] => { 2026-02-05 04:06:35.366624 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:06:35.366629 | orchestrator | } 2026-02-05 04:06:35.366635 | orchestrator | changed: [testbed-node-5] => { 2026-02-05 04:06:35.366640 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:06:35.366645 | orchestrator | } 2026-02-05 04:06:35.366667 | orchestrator | 2026-02-05 04:06:35.366673 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:06:35.366678 | orchestrator | Thursday 05 February 2026 04:06:34 +0000 (0:00:02.116) 0:01:22.763 ***** 2026-02-05 04:06:35.366684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:06:35.366706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:35.366712 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:35.366717 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:06:35.366723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:06:35.366729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:35.366742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:35.366769 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:06:35.366777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:06:35.366788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:35.366800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:06:35.366807 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:06:35.366819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:08:44.609901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610169 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
04:08:44.610185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:08:44.610199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610248 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:08:44.610260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:08:44.610272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610320 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:08:44.610332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 04:08:44.610349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:08:44.610381 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:08:44.610393 | orchestrator | 2026-02-05 04:08:44.610406 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610421 | orchestrator | Thursday 05 February 2026 04:06:37 +0000 (0:00:03.023) 0:01:25.787 ***** 2026-02-05 04:08:44.610435 | orchestrator | 2026-02-05 04:08:44.610448 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610461 | orchestrator | Thursday 05 February 2026 04:06:38 +0000 (0:00:00.453) 0:01:26.240 ***** 
2026-02-05 04:08:44.610474 | orchestrator | 2026-02-05 04:08:44.610487 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610500 | orchestrator | Thursday 05 February 2026 04:06:38 +0000 (0:00:00.454) 0:01:26.694 ***** 2026-02-05 04:08:44.610513 | orchestrator | 2026-02-05 04:08:44.610526 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610538 | orchestrator | Thursday 05 February 2026 04:06:39 +0000 (0:00:00.464) 0:01:27.159 ***** 2026-02-05 04:08:44.610551 | orchestrator | 2026-02-05 04:08:44.610564 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610576 | orchestrator | Thursday 05 February 2026 04:06:39 +0000 (0:00:00.445) 0:01:27.604 ***** 2026-02-05 04:08:44.610588 | orchestrator | 2026-02-05 04:08:44.610602 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610614 | orchestrator | Thursday 05 February 2026 04:06:40 +0000 (0:00:00.720) 0:01:28.325 ***** 2026-02-05 04:08:44.610627 | orchestrator | 2026-02-05 04:08:44.610639 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-05 04:08:44.610652 | orchestrator | Thursday 05 February 2026 04:06:40 +0000 (0:00:00.488) 0:01:28.814 ***** 2026-02-05 04:08:44.610665 | orchestrator | 2026-02-05 04:08:44.610678 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-05 04:08:44.610692 | orchestrator | Thursday 05 February 2026 04:06:41 +0000 (0:00:00.831) 0:01:29.645 ***** 2026-02-05 04:08:44.610705 | orchestrator | changed: [testbed-manager] 2026-02-05 04:08:44.610718 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:08:44.610731 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:08:44.610744 | orchestrator | changed: [testbed-node-2] 
2026-02-05 04:08:44.610755 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:08:44.610766 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:08:44.610777 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:08:44.610812 | orchestrator | 2026-02-05 04:08:44.610825 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-05 04:08:44.610836 | orchestrator | Thursday 05 February 2026 04:07:42 +0000 (0:01:01.342) 0:02:30.988 ***** 2026-02-05 04:08:44.610847 | orchestrator | changed: [testbed-manager] 2026-02-05 04:08:44.610858 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:08:44.610868 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:08:44.610879 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:08:44.610890 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:08:44.610901 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:08:44.610912 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:08:44.610923 | orchestrator | 2026-02-05 04:08:44.610934 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-05 04:08:44.610953 | orchestrator | Thursday 05 February 2026 04:08:44 +0000 (0:01:01.686) 0:03:32.675 ***** 2026-02-05 04:09:08.381957 | orchestrator | ok: [testbed-manager] 2026-02-05 04:09:08.382104 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:09:08.382118 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:09:08.382127 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:09:08.382166 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:09:08.382181 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:09:08.382192 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:09:08.382204 | orchestrator | 2026-02-05 04:09:08.382218 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-05 04:09:08.382233 | orchestrator | Thursday 05 February 2026 04:08:48 +0000 (0:00:03.692) 
0:03:36.367 ***** 2026-02-05 04:09:08.382247 | orchestrator | changed: [testbed-manager] 2026-02-05 04:09:08.382261 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:09:08.382273 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:09:08.382285 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:09:08.382293 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:09:08.382300 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:09:08.382308 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:09:08.382315 | orchestrator | 2026-02-05 04:09:08.382323 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:09:08.382331 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382355 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382363 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382370 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382377 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382385 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382392 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:09:08.382399 | orchestrator | 2026-02-05 04:09:08.382406 | orchestrator | 2026-02-05 04:09:08.382414 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:09:08.382424 | orchestrator | Thursday 05 February 2026 04:09:07 +0000 (0:00:19.518) 0:03:55.886 ***** 2026-02-05 04:09:08.382433 | orchestrator | 
=============================================================================== 2026-02-05 04:09:08.382441 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 61.69s 2026-02-05 04:09:08.382450 | orchestrator | common : Restart fluentd container ------------------------------------- 61.34s 2026-02-05 04:09:08.382459 | orchestrator | common : Restart cron container ---------------------------------------- 19.52s 2026-02-05 04:09:08.382467 | orchestrator | common : Copying over config.json files for services -------------------- 7.13s 2026-02-05 04:09:08.382476 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.55s 2026-02-05 04:09:08.382485 | orchestrator | service-check-containers : common | Check containers -------------------- 5.93s 2026-02-05 04:09:08.382493 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.15s 2026-02-05 04:09:08.382502 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.72s 2026-02-05 04:09:08.382511 | orchestrator | common : Flush handlers ------------------------------------------------- 3.86s 2026-02-05 04:09:08.382520 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.82s 2026-02-05 04:09:08.382529 | orchestrator | common : include_tasks -------------------------------------------------- 3.79s 2026-02-05 04:09:08.382538 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.69s 2026-02-05 04:09:08.382547 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.59s 2026-02-05 04:09:08.382564 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.57s 2026-02-05 04:09:08.382572 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.46s 2026-02-05 04:09:08.382582 | orchestrator | common : Ensure 
RabbitMQ Erlang cookie exists --------------------------- 3.43s 2026-02-05 04:09:08.382591 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.39s 2026-02-05 04:09:08.382601 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.29s 2026-02-05 04:09:08.382609 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.07s 2026-02-05 04:09:08.382619 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.02s 2026-02-05 04:09:08.717210 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-05 04:09:10.818305 | orchestrator | 2026-02-05 04:09:10 | INFO  | Task 363acdc8-edaa-4dab-9c41-32d820a474ea (loadbalancer) was prepared for execution. 2026-02-05 04:09:10.818382 | orchestrator | 2026-02-05 04:09:10 | INFO  | It takes a moment until task 363acdc8-edaa-4dab-9c41-32d820a474ea (loadbalancer) has been started and output is visible here. 
2026-02-05 04:09:48.257744 | orchestrator |
2026-02-05 04:09:48.257933 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 04:09:48.257965 | orchestrator |
2026-02-05 04:09:48.257985 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 04:09:48.258002 | orchestrator | Thursday 05 February 2026 04:09:17 +0000 (0:00:01.503) 0:00:01.503 *****
2026-02-05 04:09:48.258084 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:09:48.258099 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:09:48.258109 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:09:48.258119 | orchestrator |
2026-02-05 04:09:48.258128 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 04:09:48.258138 | orchestrator | Thursday 05 February 2026 04:09:19 +0000 (0:00:01.843) 0:00:03.347 *****
2026-02-05 04:09:48.258150 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-05 04:09:48.258160 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-05 04:09:48.258170 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-05 04:09:48.258180 | orchestrator |
2026-02-05 04:09:48.258190 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-05 04:09:48.258199 | orchestrator |
2026-02-05 04:09:48.258209 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-05 04:09:48.258219 | orchestrator | Thursday 05 February 2026 04:09:21 +0000 (0:00:02.572) 0:00:05.919 *****
2026-02-05 04:09:48.258246 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:09:48.258257 | orchestrator |
2026-02-05 04:09:48.258267 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-02-05 04:09:48.258277 | orchestrator | Thursday 05 February 2026 04:09:24 +0000 (0:00:02.317) 0:00:08.236 *****
2026-02-05 04:09:48.258289 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:09:48.258300 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:09:48.258311 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:09:48.258324 | orchestrator |
2026-02-05 04:09:48.258336 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-02-05 04:09:48.258347 | orchestrator | Thursday 05 February 2026 04:09:26 +0000 (0:00:01.989) 0:00:10.226 *****
2026-02-05 04:09:48.258359 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:09:48.258384 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:09:48.258405 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:09:48.258418 | orchestrator |
2026-02-05 04:09:48.258429 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-05 04:09:48.258441 | orchestrator | Thursday 05 February 2026 04:09:28 +0000 (0:00:02.123) 0:00:12.350 *****
2026-02-05 04:09:48.258452 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:09:48.258487 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:09:48.258500 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:09:48.258511 | orchestrator |
2026-02-05 04:09:48.258523 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-05 04:09:48.258534 | orchestrator | Thursday 05 February 2026 04:09:30 +0000 (0:00:01.666) 0:00:14.016 *****
2026-02-05 04:09:48.258545 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:09:48.258557 | orchestrator |
2026-02-05 04:09:48.258568 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-05 04:09:48.258580 | orchestrator | Thursday 05 February 2026 04:09:31 +0000 (0:00:01.927) 0:00:15.944 *****
2026-02-05 04:09:48.258591 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:09:48.258602 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:09:48.258614 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:09:48.258625 | orchestrator |
2026-02-05 04:09:48.258637 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-05 04:09:48.258648 | orchestrator | Thursday 05 February 2026 04:09:33 +0000 (0:00:01.736) 0:00:17.681 *****
2026-02-05 04:09:48.258660 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 04:09:48.258674 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 04:09:48.258691 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 04:09:48.258706 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 04:09:48.258725 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 04:09:48.258741 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 04:09:48.258757 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 04:09:48.258771 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 04:09:48.258788 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 04:09:48.258833 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 04:09:48.258851 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 04:09:48.258868 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
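The sysctl items applied above can be read as a plain sysctl drop-in. A minimal sketch, assuming the same three managed keys (the file name and `/tmp` path are illustrative, not from the log); `'value': 'KOLLA_UNSET'` in the log means the key is deliberately left unmanaged, so `net.ipv4.tcp_retries2` is omitted here:

```shell
# Illustrative only: the managed keys from the task above as a
# sysctl.d-style drop-in. The role itself applies them via Ansible's
# sysctl module; /tmp is used so this sketch runs without root.
cat > /tmp/90-loadbalancer-sysctl.conf <<'EOF'
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
net.unix.max_dgram_qlen = 128
EOF
cat /tmp/90-loadbalancer-sysctl.conf
# Applying it for real would be:
#   sudo sysctl -p /tmp/90-loadbalancer-sysctl.conf
```

The nonlocal-bind keys let keepalived/haproxy bind the VIP on nodes that do not currently hold it.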
2026-02-05 04:09:48.258878 | orchestrator |
2026-02-05 04:09:48.258888 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-05 04:09:48.258897 | orchestrator | Thursday 05 February 2026 04:09:38 +0000 (0:00:05.298) 0:00:22.980 *****
2026-02-05 04:09:48.258907 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-05 04:09:48.258917 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-05 04:09:48.258927 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-05 04:09:48.258937 | orchestrator |
2026-02-05 04:09:48.258947 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-05 04:09:48.258978 | orchestrator | Thursday 05 February 2026 04:09:41 +0000 (0:00:02.010) 0:00:24.991 *****
2026-02-05 04:09:48.258988 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-05 04:09:48.258998 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-05 04:09:48.259008 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-05 04:09:48.259017 | orchestrator |
2026-02-05 04:09:48.259027 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-05 04:09:48.259037 | orchestrator | Thursday 05 February 2026 04:09:43 +0000 (0:00:02.418) 0:00:27.409 *****
2026-02-05 04:09:48.259047 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-05 04:09:48.259057 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:09:48.259067 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-05 04:09:48.259092 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:09:48.259102 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-05 04:09:48.259112 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:09:48.259121 | orchestrator |
2026-02-05 04:09:48.259132 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-05 04:09:48.259141 | orchestrator | Thursday 05 February 2026 04:09:45 +0000 (0:00:01.993) 0:00:29.402 ***** 2026-02-05 04:09:48.259161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 04:09:48.259178 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 04:09:48.259189 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 04:09:48.259199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:09:48.259209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:09:48.259227 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:09:59.395931 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:09:59.396056 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:09:59.396072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:09:59.396084 | orchestrator | 2026-02-05 04:09:59.396096 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-05 04:09:59.396107 | orchestrator | Thursday 05 February 2026 04:09:48 +0000 (0:00:02.827) 0:00:32.229 ***** 2026-02-05 04:09:59.396116 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:09:59.396126 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:09:59.396135 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:09:59.396144 | orchestrator | 2026-02-05 04:09:59.396153 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-05 04:09:59.396162 | orchestrator | Thursday 05 February 2026 04:09:50 +0000 (0:00:02.056) 0:00:34.286 ***** 2026-02-05 04:09:59.396171 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-05 04:09:59.396180 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-05 04:09:59.396191 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-05 04:09:59.396206 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-05 04:09:59.396221 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-05 04:09:59.396234 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-05 04:09:59.396249 | orchestrator | 2026-02-05 04:09:59.396264 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-05 04:09:59.396278 | orchestrator | Thursday 05 February 2026 04:09:53 +0000 (0:00:02.931) 0:00:37.217 ***** 2026-02-05 04:09:59.396292 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:09:59.396308 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:09:59.396322 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:09:59.396336 | orchestrator | 2026-02-05 04:09:59.396352 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-05 04:09:59.396366 | orchestrator | Thursday 05 February 2026 04:09:55 +0000 (0:00:02.286) 0:00:39.504 ***** 2026-02-05 04:09:59.396382 | orchestrator | ok: 
[testbed-node-0] 2026-02-05 04:09:59.396396 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:09:59.396438 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:09:59.396454 | orchestrator | 2026-02-05 04:09:59.396468 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-05 04:09:59.396479 | orchestrator | Thursday 05 February 2026 04:09:57 +0000 (0:00:02.166) 0:00:41.670 ***** 2026-02-05 04:09:59.396491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 04:09:59.396522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:09:59.396540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:09:59.396552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 04:09:59.396563 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:09:59.396575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 04:09:59.396585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:09:59.396603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:09:59.396614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 04:09:59.396624 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
04:09:59.396647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 04:10:03.626975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:10:03.627083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:10:03.627101 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 04:10:03.627142 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:10:03.627156 | orchestrator | 2026-02-05 04:10:03.627169 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-05 04:10:03.627183 | orchestrator | Thursday 05 February 2026 04:09:59 +0000 (0:00:01.687) 0:00:43.358 ***** 2026-02-05 04:10:03.627196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 04:10:03.627208 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 04:10:03.627244 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 04:10:03.627291 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:10:03.627313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:10:03.627332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 04:10:03.627362 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:10:03.627381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:03.627400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 04:10:03.627443 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:17.643204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:17.643312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264', '__omit_place_holder__9c8b629d84c1d9ad884f8280e638ebc31232a264'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 04:10:17.643340 | orchestrator |
2026-02-05 04:10:17.643347 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-05 04:10:17.643352 | orchestrator | Thursday 05 February 2026 04:10:03 +0000 (0:00:04.244) 0:00:47.603 *****
2026-02-05 04:10:17.643357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 04:10:17.643366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 04:10:17.643373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 04:10:17.643392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:17.643417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:17.643429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:17.643436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:17.643442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:17.643448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:17.643454 | orchestrator |
2026-02-05 04:10:17.643460 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-05 04:10:17.643466 | orchestrator | Thursday 05 February 2026 04:10:08 +0000 (0:00:04.914) 0:00:52.517 *****
2026-02-05 04:10:17.643472 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-05 04:10:17.643480 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-05 04:10:17.643485 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-05 04:10:17.643491 | orchestrator |
2026-02-05 04:10:17.643497 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-05 04:10:17.643504 | orchestrator | Thursday 05 February 2026 04:10:11 +0000 (0:00:02.744) 0:00:55.261 *****
2026-02-05 04:10:17.643510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-05 04:10:17.643516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-05 04:10:17.643522 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-05 04:10:17.643528 | orchestrator |
2026-02-05 04:10:17.643534 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-05 04:10:17.643541 | orchestrator | Thursday 05 February 2026 04:10:15 +0000 (0:00:04.436) 0:00:59.698 *****
2026-02-05 04:10:17.643547 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:10:17.643555 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:10:17.643567 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:10:38.445152 | orchestrator |
2026-02-05 04:10:38.445287 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-05 04:10:38.445372 | orchestrator | Thursday 05 February 2026 04:10:17 +0000 (0:00:01.916) 0:01:01.614 *****
2026-02-05 04:10:38.445389 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-05 04:10:38.445403 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-05 04:10:38.445417 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-05 04:10:38.445430 | orchestrator |
2026-02-05 04:10:38.445443 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-05 04:10:38.445513 | orchestrator | Thursday 05 February 2026 04:10:20 +0000 (0:00:03.152) 0:01:04.767 *****
2026-02-05 04:10:38.445527 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-05 04:10:38.445537 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-05 04:10:38.445546 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-05 04:10:38.445554 | orchestrator |
2026-02-05 04:10:38.445562 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-05 04:10:38.445570 | orchestrator | Thursday 05 February 2026 04:10:23 +0000 (0:00:02.888) 0:01:07.655 *****
2026-02-05 04:10:38.445579 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:10:38.445587 | orchestrator |
2026-02-05 04:10:38.445595 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-05 04:10:38.445603 | orchestrator | Thursday 05 February 2026 04:10:25 +0000 (0:00:01.923) 0:01:09.579 *****
2026-02-05 04:10:38.445612 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-02-05 04:10:38.445621 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-02-05 04:10:38.445629 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-02-05 04:10:38.445637 | orchestrator |
2026-02-05 04:10:38.445645 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-05 04:10:38.445664 | orchestrator | Thursday 05 February 2026 04:10:28 +0000 (0:00:02.723) 0:01:12.302 *****
2026-02-05 04:10:38.445672 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-05 04:10:38.445706 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-05 04:10:38.445736 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-05 04:10:38.445750 | orchestrator |
2026-02-05 04:10:38.445927 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-02-05 04:10:38.445939 | orchestrator | Thursday 05 February 2026 04:10:30 +0000 (0:00:02.551) 0:01:14.854 *****
2026-02-05 04:10:38.445948 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:10:38.445958 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:10:38.445973 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:10:38.445986 | orchestrator |
2026-02-05 04:10:38.446000 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-02-05 04:10:38.446077 | orchestrator | Thursday 05 February 2026 04:10:32 +0000 (0:00:01.353) 0:01:16.207 *****
2026-02-05 04:10:38.446094 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:10:38.446102 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:10:38.446110 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:10:38.446118 | orchestrator |
2026-02-05 04:10:38.446128 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-05 04:10:38.446142 | orchestrator | Thursday 05 February 2026 04:10:34 +0000 (0:00:04.113) 0:01:18.301 *****
2026-02-05 04:10:38.446160 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 04:10:38.446210 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 04:10:38.446254 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 04:10:38.446270 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:38.446283 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:38.446298 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:38.446307 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:38.446326 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:38.446341 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:42.394252 | orchestrator |
2026-02-05 04:10:42.394416 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-05 04:10:42.394437 | orchestrator | Thursday 05 February 2026 04:10:38 +0000 (0:00:04.113) 0:01:22.414 *****
2026-02-05 04:10:42.394454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 04:10:42.394470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:42.394483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:42.394495 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:10:42.394509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 04:10:42.394556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:42.394587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:42.394606 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:10:42.394659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 04:10:42.394685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:42.394705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:42.394722 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:10:42.394739 | orchestrator |
2026-02-05 04:10:42.394758 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-05 04:10:42.394777 | orchestrator | Thursday 05 February 2026 04:10:40 +0000 (0:00:01.742) 0:01:24.157 *****
2026-02-05 04:10:42.394824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 04:10:42.394861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:42.394892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:42.394914 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:10:42.394950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 04:10:54.459468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:54.459621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:54.459641 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:10:54.459657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 04:10:54.459694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:54.459706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:54.459783 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:10:54.459833 | orchestrator |
2026-02-05 04:10:54.459847 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-05 04:10:54.459874 | orchestrator | Thursday 05 February 2026 04:10:42 +0000 (0:00:02.210) 0:01:26.367 *****
2026-02-05 04:10:54.459887 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-05 04:10:54.459899 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-05 04:10:54.459911 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-05 04:10:54.459922 | orchestrator |
2026-02-05 04:10:54.459933 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-05 04:10:54.459944 | orchestrator | Thursday 05 February 2026 04:10:44 +0000 (0:00:02.525) 0:01:28.892 *****
2026-02-05 04:10:54.459956 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-05 04:10:54.459967 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-05 04:10:54.459979 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-05 04:10:54.459993 | orchestrator |
2026-02-05 04:10:54.460035 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-05 04:10:54.460056 | orchestrator | Thursday 05 February 2026 04:10:47 +0000 (0:00:02.629) 0:01:31.522 *****
2026-02-05 04:10:54.460074 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 04:10:54.460094 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 04:10:54.460114 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-05 04:10:54.460133 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 04:10:54.460148 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:10:54.460161 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 04:10:54.460188 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:10:54.460201 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-05 04:10:54.460214 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:10:54.460227 | orchestrator |
2026-02-05 04:10:54.460240 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-02-05 04:10:54.460253 | orchestrator | Thursday 05 February 2026 04:10:50 +0000 (0:00:02.561) 0:01:34.084 *****
2026-02-05 04:10:54.460265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 04:10:54.460277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 04:10:54.460289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 04:10:54.460306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:54.460328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:58.313273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 04:10:58.313394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 04:10:58.313409 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:10:58.313421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:10:58.313431 | orchestrator | 2026-02-05 04:10:58.313443 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-05 04:10:58.313455 | orchestrator | Thursday 05 February 2026 04:10:54 +0000 (0:00:04.346) 0:01:38.430 ***** 2026-02-05 04:10:58.313467 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:10:58.313478 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:10:58.313488 | orchestrator | } 2026-02-05 04:10:58.313498 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:10:58.313509 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:10:58.313518 | orchestrator | } 2026-02-05 04:10:58.313528 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:10:58.313538 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:10:58.313548 | orchestrator | } 2026-02-05 
04:10:58.313558 | orchestrator | 2026-02-05 04:10:58.313568 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:10:58.313578 | orchestrator | Thursday 05 February 2026 04:10:55 +0000 (0:00:01.454) 0:01:39.884 ***** 2026-02-05 04:10:58.313588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 04:10:58.313633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:10:58.313652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:10:58.313663 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:10:58.313674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 04:10:58.313684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:10:58.313695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:10:58.313704 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:10:58.313727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 04:10:58.313738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:10:58.313764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:11:03.875581 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:03.875655 | orchestrator | 2026-02-05 04:11:03.875663 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-05 04:11:03.875669 | orchestrator | Thursday 05 February 2026 04:10:58 +0000 (0:00:02.393) 0:01:42.278 ***** 2026-02-05 04:11:03.875673 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:11:03.875678 | orchestrator | 2026-02-05 04:11:03.875682 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-05 04:11:03.875687 | orchestrator | Thursday 05 February 2026 04:11:00 +0000 (0:00:01.974) 0:01:44.253 ***** 2026-02-05 04:11:03.875694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:03.875701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 04:11:03.875717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:03.875723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 04:11:03.875751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:03.875757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 04:11:03.875761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:03.875766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 04:11:03.875772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:03.875781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 04:11:03.875788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:05.627653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 04:11:05.627722 | orchestrator | 2026-02-05 04:11:05.627729 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-05 04:11:05.627735 | orchestrator | Thursday 05 February 2026 04:11:04 +0000 (0:00:04.684) 0:01:48.938 ***** 2026-02-05 04:11:05.627741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:05.627749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-02-05 04:11:05.627779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:05.627784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 04:11:05.627789 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:05.627826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:05.627831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 04:11:05.627835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:05.627839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 04:11:05.627847 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:05.627854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:05.627859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 04:11:05.627866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:20.052931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 04:11:20.053047 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:20.053064 | orchestrator | 2026-02-05 04:11:20.053076 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-05 04:11:20.053087 | orchestrator | Thursday 05 February 2026 04:11:06 +0000 (0:00:01.728) 0:01:50.667 ***** 2026-02-05 04:11:20.053099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-02-05 04:11:20.053112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:20.053146 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:20.053154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:20.053160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:20.053166 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:20.053179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:20.053185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:20.053191 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:20.053197 | orchestrator | 2026-02-05 04:11:20.053203 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-05 04:11:20.053209 | orchestrator | Thursday 05 February 2026 04:11:08 +0000 (0:00:02.234) 0:01:52.901 ***** 2026-02-05 04:11:20.053215 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:11:20.053222 | 
orchestrator | ok: [testbed-node-1] 2026-02-05 04:11:20.053228 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:11:20.053233 | orchestrator | 2026-02-05 04:11:20.053239 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-05 04:11:20.053246 | orchestrator | Thursday 05 February 2026 04:11:11 +0000 (0:00:02.320) 0:01:55.221 ***** 2026-02-05 04:11:20.053255 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:11:20.053264 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:11:20.053273 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:11:20.053282 | orchestrator | 2026-02-05 04:11:20.053292 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-05 04:11:20.053302 | orchestrator | Thursday 05 February 2026 04:11:14 +0000 (0:00:02.775) 0:01:57.997 ***** 2026-02-05 04:11:20.053312 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:11:20.053321 | orchestrator | 2026-02-05 04:11:20.053331 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-05 04:11:20.053340 | orchestrator | Thursday 05 February 2026 04:11:15 +0000 (0:00:01.634) 0:01:59.632 ***** 2026-02-05 04:11:20.053371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:20.053385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:20.053404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:11:20.053418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:20.053429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:20.053440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:11:20.053460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:21.715481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715626 | orchestrator | 2026-02-05 04:11:21.715641 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-05 04:11:21.715673 | orchestrator | Thursday 05 February 2026 04:11:20 +0000 (0:00:04.393) 0:02:04.025 ***** 2026-02-05 04:11:21.715689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:21.715703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715751 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:21.715787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:21.715865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715889 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:21.715901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:21.715922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-02-05 04:11:21.715943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:11:37.596865 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:37.596979 | orchestrator | 2026-02-05 04:11:37.596994 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-05 04:11:37.597005 | orchestrator | Thursday 05 February 2026 04:11:21 +0000 (0:00:01.658) 0:02:05.684 ***** 2026-02-05 04:11:37.597017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:37.597048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:37.597061 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:37.597072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 
04:11:37.597083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:37.597093 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:37.597104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:37.597115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:11:37.597125 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:37.597134 | orchestrator | 2026-02-05 04:11:37.597145 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-05 04:11:37.597155 | orchestrator | Thursday 05 February 2026 04:11:23 +0000 (0:00:01.877) 0:02:07.561 ***** 2026-02-05 04:11:37.597188 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:11:37.597199 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:11:37.597208 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:11:37.597218 | orchestrator | 2026-02-05 04:11:37.597228 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-05 04:11:37.597238 | orchestrator | Thursday 05 February 2026 04:11:25 +0000 (0:00:02.217) 0:02:09.778 ***** 2026-02-05 04:11:37.597248 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:11:37.597257 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:11:37.597267 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 04:11:37.597277 | orchestrator | 2026-02-05 04:11:37.597287 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-05 04:11:37.597297 | orchestrator | Thursday 05 February 2026 04:11:28 +0000 (0:00:02.726) 0:02:12.505 ***** 2026-02-05 04:11:37.597307 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:37.597317 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:37.597327 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:37.597337 | orchestrator | 2026-02-05 04:11:37.597346 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-05 04:11:37.597355 | orchestrator | Thursday 05 February 2026 04:11:29 +0000 (0:00:01.324) 0:02:13.830 ***** 2026-02-05 04:11:37.597365 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:11:37.597376 | orchestrator | 2026-02-05 04:11:37.597386 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-05 04:11:37.597396 | orchestrator | Thursday 05 February 2026 04:11:31 +0000 (0:00:01.638) 0:02:15.468 ***** 2026-02-05 04:11:37.597409 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 04:11:37.597451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 04:11:37.597464 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 04:11:37.597484 | orchestrator | 2026-02-05 04:11:37.597495 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-05 
04:11:37.597508 | orchestrator | Thursday 05 February 2026 04:11:35 +0000 (0:00:03.560) 0:02:19.028 ***** 2026-02-05 04:11:37.597519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 04:11:37.597533 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:37.597544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 04:11:37.597555 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
04:11:37.597574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 04:11:49.171081 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:49.171191 | orchestrator | 2026-02-05 04:11:49.171222 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-05 04:11:49.171246 | orchestrator | Thursday 05 February 2026 04:11:37 +0000 (0:00:02.542) 0:02:21.570 ***** 2026-02-05 04:11:49.171286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 04:11:49.171307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 04:11:49.171353 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:49.171371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 04:11:49.171388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 04:11:49.171406 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:49.171422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 04:11:49.171437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-05 04:11:49.171452 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:49.171465 | orchestrator | 2026-02-05 04:11:49.171480 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-05 04:11:49.171495 | orchestrator | Thursday 05 February 2026 04:11:40 +0000 (0:00:02.723) 0:02:24.294 ***** 2026-02-05 04:11:49.171509 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:49.171524 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:49.171538 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:49.171553 | orchestrator | 2026-02-05 04:11:49.171569 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-05 04:11:49.171585 | orchestrator | Thursday 05 February 2026 04:11:41 +0000 (0:00:01.407) 0:02:25.701 ***** 2026-02-05 04:11:49.171602 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:49.171622 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:11:49.171640 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:11:49.171659 | orchestrator | 2026-02-05 04:11:49.171675 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-05 04:11:49.171689 | orchestrator | Thursday 05 February 2026 04:11:43 +0000 (0:00:02.211) 0:02:27.913 ***** 2026-02-05 04:11:49.171705 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:11:49.171720 | orchestrator | 2026-02-05 04:11:49.171737 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-05 04:11:49.171753 | orchestrator | Thursday 05 February 2026 04:11:45 +0000 (0:00:01.730) 0:02:29.644 ***** 2026-02-05 04:11:49.171842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:49.171885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:11:49.171903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 04:11:49.171921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 04:11:49.171940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:49.171980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:11:51.110165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110223 | orchestrator | 2026-02-05 04:11:51.110230 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-05 04:11:51.110239 | orchestrator | Thursday 05 February 2026 04:11:50 +0000 
(0:00:04.586) 0:02:34.230 ***** 2026-02-05 04:11:51.110247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:11:51.110253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 04:11:51.110277 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:11:51.110291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:12:02.509037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:12:02.509190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 04:12:02.509220 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 04:12:02.509271 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:02.509307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:12:02.509328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:12:02.509371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 04:12:02.509385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 04:12:02.509397 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:02.509408 | orchestrator | 2026-02-05 04:12:02.509421 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-05 04:12:02.509434 | orchestrator | Thursday 05 February 2026 04:11:52 +0000 (0:00:01.957) 0:02:36.188 ***** 2026-02-05 04:12:02.509448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:12:02.509464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:12:02.509487 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:02.509501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:12:02.509515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:12:02.509528 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:02.509542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-05 04:12:02.509556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:12:02.509570 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:02.509583 | orchestrator | 2026-02-05 04:12:02.509597 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-05 04:12:02.509611 | orchestrator | Thursday 05 February 2026 04:11:54 +0000 (0:00:02.173) 0:02:38.362 ***** 2026-02-05 04:12:02.509624 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:12:02.509638 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:12:02.509656 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:12:02.509668 | orchestrator | 2026-02-05 04:12:02.509681 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-05 04:12:02.509694 | orchestrator | Thursday 05 February 2026 04:11:56 +0000 (0:00:02.223) 0:02:40.585 ***** 2026-02-05 04:12:02.509707 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:12:02.509720 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:12:02.509734 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:12:02.509747 | orchestrator | 2026-02-05 04:12:02.509760 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-05 04:12:02.509778 | orchestrator | Thursday 05 February 2026 04:11:59 +0000 (0:00:02.916) 0:02:43.502 ***** 2026-02-05 04:12:02.509825 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:02.509844 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:02.509862 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:02.509880 | orchestrator | 2026-02-05 04:12:02.509896 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-02-05 04:12:02.509914 | orchestrator | Thursday 05 February 2026 04:12:01 +0000 (0:00:01.541) 0:02:45.044 ***** 2026-02-05 04:12:02.509933 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:02.509952 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:02.509984 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:07.715121 | orchestrator | 2026-02-05 04:12:07.715256 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-05 04:12:07.715285 | orchestrator | Thursday 05 February 2026 04:12:02 +0000 (0:00:01.438) 0:02:46.483 ***** 2026-02-05 04:12:07.715306 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:12:07.715326 | orchestrator | 2026-02-05 04:12:07.715346 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-05 04:12:07.715358 | orchestrator | Thursday 05 February 2026 04:12:04 +0000 (0:00:01.769) 0:02:48.253 ***** 2026-02-05 04:12:07.715375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:12:07.715419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 04:12:07.715434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:12:07.715507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 04:12:07.715539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715566 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 04:12:07.715598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739205 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:12:09.739415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 04:12:09.739439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 04:12:09.739545 | orchestrator | 2026-02-05 04:12:09.739558 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-05 04:12:09.739570 | orchestrator | Thursday 05 February 2026 04:12:09 +0000 (0:00:04.864) 0:02:53.118 ***** 2026-02-05 04:12:09.739587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:12:09.739600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 04:12:09.739628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.958997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:12:10.959230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 04:12:10.959329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959363 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:10.959381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 04:12:10.959465 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:10.959487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:12:25.633846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 04:12:25.633964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 04:12:25.633978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 04:12:25.634004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 04:12:25.634099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:12:25.634117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-05 04:12:25.634133 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:12:25.634152 | orchestrator |
2026-02-05 04:12:25.634167 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-02-05 04:12:25.634182 | orchestrator | Thursday 05 February 2026 04:12:10 +0000 (0:00:01.818) 0:02:54.936 *****
2026-02-05 04:12:25.634221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:12:25.634241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:12:25.634258 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:12:25.634273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:12:25.634288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:12:25.634301 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:12:25.634310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:12:25.634319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:12:25.634327 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:12:25.634336 | orchestrator |
2026-02-05 04:12:25.634345 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-05 04:12:25.634354 | orchestrator | Thursday 05 February 2026 04:12:12 +0000 (0:00:01.953) 0:02:56.889 *****
2026-02-05 04:12:25.634373 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:12:25.634383 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:12:25.634391 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:12:25.634400 | orchestrator |
2026-02-05 04:12:25.634409 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-05 04:12:25.634417 | orchestrator | Thursday 05 February 2026 04:12:15 +0000 (0:00:02.253) 0:02:59.143 *****
2026-02-05 04:12:25.634426 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:12:25.634435 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:12:25.634444 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:12:25.634453 | orchestrator |
2026-02-05 04:12:25.634462 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-05 04:12:25.634471 | orchestrator | Thursday 05 February 2026 04:12:17 +0000 (0:00:01.321) 0:03:01.890 *****
2026-02-05 04:12:25.634480 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:12:25.634489 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:12:25.634498 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:12:25.634506 | orchestrator |
2026-02-05 04:12:25.634515 | orchestrator | TASK
[include_role : glance] ***************************************************
2026-02-05 04:12:25.634524 | orchestrator | Thursday 05 February 2026 04:12:19 +0000 (0:00:01.778) 0:03:03.212 *****
2026-02-05 04:12:25.634533 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:12:25.634542 | orchestrator |
2026-02-05 04:12:25.634550 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-05 04:12:25.634559 | orchestrator | Thursday 05 February 2026 04:12:21 +0000 (0:00:01.778) 0:03:04.990 *****
2026-02-05 04:12:25.634580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 04:12:26.738380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 04:12:26.738530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 04:12:26.738574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 04:12:26.738603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 
04:12:26.738626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 
04:12:29.808219 | orchestrator | 2026-02-05 04:12:29.808322 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-05 04:12:29.808339 | orchestrator | Thursday 05 February 2026 04:12:26 +0000 (0:00:05.727) 0:03:10.717 ***** 2026-02-05 04:12:29.808376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 04:12:29.808394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 04:12:29.808469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 04:12:29.808486 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:29.808501 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 04:12:29.808538 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 04:12:46.714290 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:46.714383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 04:12:46.714412 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:46.714420 | orchestrator | 2026-02-05 04:12:46.714428 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-02-05 04:12:46.714435 | orchestrator | Thursday 05 February 2026 04:12:30 +0000 (0:00:04.210) 0:03:14.928 ***** 2026-02-05 04:12:46.714443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 04:12:46.714462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 04:12:46.714470 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:46.714477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 04:12:46.714497 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 04:12:46.714504 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:46.714511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 04:12:46.714517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 04:12:46.714528 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:46.714535 | orchestrator | 2026-02-05 04:12:46.714541 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-02-05 04:12:46.714548 | orchestrator | Thursday 05 February 2026 04:12:34 +0000 (0:00:03.948) 0:03:18.877 ***** 2026-02-05 04:12:46.714554 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:12:46.714561 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:12:46.714567 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:12:46.714573 | orchestrator | 2026-02-05 04:12:46.714579 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-05 04:12:46.714585 | orchestrator | Thursday 05 February 2026 04:12:37 +0000 (0:00:02.146) 0:03:21.023 ***** 2026-02-05 04:12:46.714592 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:12:46.714598 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:12:46.714604 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:12:46.714610 | orchestrator | 2026-02-05 04:12:46.714616 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-05 04:12:46.714622 | orchestrator | Thursday 05 February 2026 04:12:39 +0000 (0:00:02.699) 0:03:23.722 ***** 2026-02-05 04:12:46.714628 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:12:46.714634 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:12:46.714640 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:12:46.714646 | orchestrator | 2026-02-05 04:12:46.714653 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-05 04:12:46.714659 | orchestrator | Thursday 05 February 2026 04:12:41 +0000 (0:00:01.311) 0:03:25.034 ***** 2026-02-05 04:12:46.714665 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:12:46.714671 | orchestrator | 2026-02-05 04:12:46.714677 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-05 04:12:46.714683 | orchestrator | Thursday 05 February 2026 04:12:42 +0000 (0:00:01.626) 0:03:26.661 ***** 2026-02-05 
04:12:46.714693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:12:46.714706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:02.709305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:02.709450 | orchestrator | 2026-02-05 04:13:02.709468 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-05 04:13:02.709481 | orchestrator | Thursday 05 February 2026 04:12:46 +0000 (0:00:04.027) 0:03:30.688 ***** 2026-02-05 04:13:02.709494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:13:02.709506 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:13:02.709519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:13:02.709531 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:13:02.709568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:13:02.709580 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:13:02.709591 | orchestrator | 2026-02-05 04:13:02.709603 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-05 04:13:02.709614 | orchestrator | Thursday 05 February 2026 04:12:48 +0000 (0:00:01.491) 0:03:32.180 ***** 2026-02-05 04:13:02.709628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:02.709643 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:02.709656 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:13:02.709691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:02.709712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:02.709789 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:13:02.709825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:02.709871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:02.709884 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:13:02.709898 | orchestrator | 2026-02-05 04:13:02.709911 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-05 04:13:02.709924 | orchestrator | Thursday 05 February 2026 04:12:49 +0000 (0:00:01.404) 0:03:33.584 ***** 2026-02-05 04:13:02.709937 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:13:02.709951 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:13:02.709965 | 
orchestrator | ok: [testbed-node-2] 2026-02-05 04:13:02.709978 | orchestrator | 2026-02-05 04:13:02.709992 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-05 04:13:02.710005 | orchestrator | Thursday 05 February 2026 04:12:51 +0000 (0:00:02.114) 0:03:35.699 ***** 2026-02-05 04:13:02.710081 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:13:02.710094 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:13:02.710105 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:13:02.710116 | orchestrator | 2026-02-05 04:13:02.710127 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-05 04:13:02.710138 | orchestrator | Thursday 05 February 2026 04:12:54 +0000 (0:00:02.771) 0:03:38.470 ***** 2026-02-05 04:13:02.710149 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:13:02.710160 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:13:02.710171 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:13:02.710182 | orchestrator | 2026-02-05 04:13:02.710193 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-05 04:13:02.710204 | orchestrator | Thursday 05 February 2026 04:12:55 +0000 (0:00:01.444) 0:03:39.915 ***** 2026-02-05 04:13:02.710215 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:13:02.710226 | orchestrator | 2026-02-05 04:13:02.710237 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-05 04:13:02.710248 | orchestrator | Thursday 05 February 2026 04:12:57 +0000 (0:00:01.654) 0:03:41.570 ***** 2026-02-05 04:13:02.710317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 
04:13:04.466944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 04:13:04.467100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 04:13:04.467162 | orchestrator |
2026-02-05 04:13:04.467185 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-02-05 04:13:04.467207 | orchestrator | Thursday 05 February 2026 04:13:02 +0000 (0:00:05.112) 0:03:46.682 *****
2026-02-05 04:13:04.467239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 04:13:04.467273 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:04.467312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 04:13:13.175587 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:13.175718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-05 04:13:13.175764 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:13:13.175775 | orchestrator |
2026-02-05 04:13:13.175786 |
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-02-05 04:13:13.175798 | orchestrator | Thursday 05 February 2026 04:13:04 +0000 (0:00:01.758) 0:03:48.441 *****
2026-02-05 04:13:13.175926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-02-05 04:13:13.175942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-05 04:13:13.175955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-02-05 04:13:13.175966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-05 04:13:13.175977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-05 04:13:13.175988 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:13.176019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-02-05 04:13:13.176032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-05 04:13:13.176046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-02-05 04:13:13.176064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-05 04:13:13.176097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-05 04:13:13.176122 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:13.176147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-02-05 04:13:13.176164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-05 04:13:13.176181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-02-05 04:13:13.176197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-05 04:13:13.176212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-05 04:13:13.176229 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:13:13.176244 | orchestrator |
2026-02-05 04:13:13.176260 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-05 04:13:13.176275 | orchestrator | Thursday 05 February 2026 04:13:06 +0000 (0:00:01.994) 0:03:50.436 *****
2026-02-05 04:13:13.176290 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:13:13.176307 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:13:13.176323 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:13:13.176340 | orchestrator |
2026-02-05 04:13:13.176355 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-05 04:13:13.176372 | orchestrator | Thursday 05 February 2026 04:13:08 +0000 (0:00:02.288) 0:03:52.725 *****
2026-02-05 04:13:13.176387 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:13:13.176396 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:13:13.176406 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:13:13.176415 | orchestrator |
2026-02-05 04:13:13.176425 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-05 04:13:13.176437 | orchestrator | Thursday 05 February 2026 04:13:11 +0000 (0:00:02.844) 0:03:55.570 *****
2026-02-05 04:13:13.176453 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:13.176480 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:13.176498 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:13:13.176514 | orchestrator |
2026-02-05 04:13:13.176529 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-05 04:13:13.176544 | orchestrator | Thursday 05 February 2026 04:13:12 +0000 (0:00:01.350) 0:03:56.920 *****
2026-02-05 04:13:13.176646 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:22.198532 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:22.198632 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:13:22.198642 | orchestrator |
2026-02-05 04:13:22.198652 | orchestrator | TASK [include_role : keystone] *************************************************
2026-02-05 04:13:22.198681 | orchestrator | Thursday 05 February 2026 04:13:14 +0000 (0:00:01.340) 0:03:58.261 *****
2026-02-05 04:13:22.198690 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:13:22.198696 | orchestrator |
2026-02-05 04:13:22.198702 | orchestrator | TASK
[haproxy-config : Copying over keystone haproxy config] *******************
2026-02-05 04:13:22.198708 | orchestrator | Thursday 05 February 2026 04:13:16 +0000 (0:00:01.935) 0:04:00.196 *****
2026-02-05 04:13:22.198719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-05 04:13:22.198742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 04:13:22.198751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 04:13:22.198759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-05 04:13:22.198781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-05 04:13:22.198796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 04:13:22.198846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 04:13:22.198856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 04:13:22.198863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 04:13:22.198870 | orchestrator |
2026-02-05 04:13:22.198877 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-02-05 04:13:22.198884 | orchestrator | Thursday 05 February 2026 04:13:20 +0000 (0:00:04.243) 0:04:04.440 *****
2026-02-05 04:13:22.198898 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-05 04:13:23.895089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 04:13:23.895221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 04:13:23.895240 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:23.895256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-05 04:13:23.895272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 04:13:23.895409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 04:13:23.895425 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:23.895460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-05 04:13:23.895481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 04:13:23.895494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 04:13:23.895505 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:13:23.895516 | orchestrator |
2026-02-05 04:13:23.895529 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-02-05 04:13:23.895541 | orchestrator | Thursday 05 February 2026 04:13:22 +0000 (0:00:01.736) 0:04:06.177 *****
2026-02-05 04:13:23.895554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-02-05 04:13:23.895572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-02-05 04:13:23.895612 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:23.895632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-02-05 04:13:23.895651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-02-05 04:13:23.895670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-02-05 04:13:23.895690 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:23.895710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-02-05 04:13:23.895730 | orchestrator | skipping: [testbed-node-2]
2026-02-05
04:13:23.895749 | orchestrator | 2026-02-05 04:13:23.895768 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-05 04:13:23.895792 | orchestrator | Thursday 05 February 2026 04:13:23 +0000 (0:00:01.688) 0:04:07.866 ***** 2026-02-05 04:13:38.468740 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:13:38.468893 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:13:38.468907 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:13:38.468915 | orchestrator | 2026-02-05 04:13:38.468924 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-05 04:13:38.468932 | orchestrator | Thursday 05 February 2026 04:13:26 +0000 (0:00:02.148) 0:04:10.014 ***** 2026-02-05 04:13:38.468940 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:13:38.468947 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:13:38.468953 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:13:38.468960 | orchestrator | 2026-02-05 04:13:38.468968 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-05 04:13:38.468975 | orchestrator | Thursday 05 February 2026 04:13:28 +0000 (0:00:02.851) 0:04:12.865 ***** 2026-02-05 04:13:38.468982 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:13:38.468990 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:13:38.468997 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:13:38.469004 | orchestrator | 2026-02-05 04:13:38.469011 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-05 04:13:38.469019 | orchestrator | Thursday 05 February 2026 04:13:30 +0000 (0:00:01.315) 0:04:14.181 ***** 2026-02-05 04:13:38.469026 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:13:38.469033 | orchestrator | 2026-02-05 04:13:38.469041 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] 
********************* 2026-02-05 04:13:38.469047 | orchestrator | Thursday 05 February 2026 04:13:31 +0000 (0:00:01.718) 0:04:15.899 ***** 2026-02-05 04:13:38.469074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:38.469109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-02-05 04:13:38.469119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:38.469143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:13:38.469155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:38.469168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:13:38.469175 | orchestrator | 2026-02-05 04:13:38.469182 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-05 04:13:38.469190 | orchestrator | Thursday 05 
February 2026 04:13:36 +0000 (0:00:04.857) 0:04:20.757 ***** 2026-02-05 04:13:38.469199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:13:38.469211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:13:50.612779 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 04:13:50.612986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:13:50.613030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:13:50.613042 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:13:50.613052 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:13:50.613062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:13:50.613071 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:13:50.613080 | orchestrator | 2026-02-05 04:13:50.613091 | orchestrator | TASK 
[haproxy-config : Configuring firewall for magnum] ************************ 2026-02-05 04:13:50.613101 | orchestrator | Thursday 05 February 2026 04:13:38 +0000 (0:00:01.687) 0:04:22.444 ***** 2026-02-05 04:13:50.613127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:50.613139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:50.613149 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:13:50.613158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:50.613167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:50.613186 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:13:50.613195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:13:50.613204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-05 
04:13:50.613213 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:13:50.613221 | orchestrator | 2026-02-05 04:13:50.613230 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-05 04:13:50.613239 | orchestrator | Thursday 05 February 2026 04:13:40 +0000 (0:00:01.871) 0:04:24.316 ***** 2026-02-05 04:13:50.613247 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:13:50.613257 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:13:50.613265 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:13:50.613273 | orchestrator | 2026-02-05 04:13:50.613282 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-05 04:13:50.613291 | orchestrator | Thursday 05 February 2026 04:13:42 +0000 (0:00:02.242) 0:04:26.558 ***** 2026-02-05 04:13:50.613299 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:13:50.613308 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:13:50.613317 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:13:50.613331 | orchestrator | 2026-02-05 04:13:50.613353 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-05 04:13:50.613367 | orchestrator | Thursday 05 February 2026 04:13:45 +0000 (0:00:02.872) 0:04:29.431 ***** 2026-02-05 04:13:50.613381 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:13:50.613395 | orchestrator | 2026-02-05 04:13:50.613409 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-05 04:13:50.613421 | orchestrator | Thursday 05 February 2026 04:13:47 +0000 (0:00:01.953) 0:04:31.384 ***** 2026-02-05 04:13:50.613435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:50.613460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:52.211637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.211781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.211808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.211914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.211926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.211937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.212002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:13:52.212015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.212025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.212035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 04:13:52.212047 | orchestrator | 2026-02-05 04:13:52.212061 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-05 04:13:52.212079 | orchestrator | Thursday 05 February 2026 04:13:51 +0000 (0:00:04.281) 0:04:35.666 ***** 2026-02-05 04:13:52.212097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-05 04:13:52.212134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.290190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.290298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.290317 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:55.290333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-05 04:13:55.290346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.291304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.291409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.291435 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:13:55.291465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-05 04:13:55.291485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.291505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.291524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-05 04:13:55.291562 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:13:55.291582 | orchestrator |
2026-02-05 04:13:55.291600 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-05 04:13:55.291618 | orchestrator | Thursday 05 February 2026 04:13:53 +0000 (0:00:01.619) 0:04:37.286 *****
2026-02-05 04:13:55.291637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:13:55.291658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:13:55.291675 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:13:55.291691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:13:55.291724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:14:10.680539 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:10.680636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:14:10.680651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-05 04:14:10.680677 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:10.680685 | orchestrator |
2026-02-05 04:14:10.680694 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-05 04:14:10.680704 | orchestrator | Thursday 05 February 2026 04:13:55 +0000 (0:00:01.972) 0:04:39.258 *****
2026-02-05 04:14:10.680712 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:14:10.680720 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:14:10.680728 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:14:10.680735 | orchestrator |
2026-02-05 04:14:10.680743 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-05 04:14:10.680751 | orchestrator | Thursday 05 February 2026 04:13:57 +0000 (0:00:02.228) 0:04:41.487 *****
2026-02-05 04:14:10.680759 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:14:10.680766 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:14:10.680773 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:14:10.680780 | orchestrator |
2026-02-05 04:14:10.680788 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-05 04:14:10.680795 | orchestrator | Thursday 05 February 2026 04:14:00 +0000 (0:00:02.851) 0:04:44.339 *****
2026-02-05 04:14:10.680803 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:14:10.680810 | orchestrator |
2026-02-05 04:14:10.680846 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-05 04:14:10.680854 | orchestrator | Thursday 05 February 2026 04:14:02 +0000 (0:00:02.398) 0:04:46.737 *****
2026-02-05 04:14:10.680861 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:14:10.680889 | orchestrator |
2026-02-05 04:14:10.680897 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-05 04:14:10.680904 | orchestrator | Thursday 05 February 2026 04:14:06 +0000 (0:00:04.241) 0:04:50.978 *****
2026-02-05 04:14:10.680915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 04:14:10.680941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 04:14:10.680950 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:10.680963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 04:14:10.680978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 04:14:10.680985 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:10.680999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 04:14:14.196145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 04:14:14.196255 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:14.196272 | orchestrator |
2026-02-05 04:14:14.196283 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-05 04:14:14.196313 | orchestrator | Thursday 05 February 2026 04:14:10 +0000 (0:00:03.673) 0:04:54.652 *****
2026-02-05 04:14:14.196342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 04:14:14.196355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 04:14:14.196364 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:14.196400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 04:14:14.196419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 04:14:14.196428 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:14.196438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 04:14:14.196458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-05 04:14:29.877938 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:29.878157 | orchestrator |
2026-02-05 04:14:29.878188 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-02-05 04:14:29.878208 | orchestrator | Thursday 05 February 2026 04:14:14 +0000 (0:00:03.516) 0:04:58.168 *****
2026-02-05 04:14:29.878256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 04:14:29.878274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 04:14:29.878284 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:29.878295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 04:14:29.878306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 04:14:29.878315 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:29.878325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 04:14:29.878336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-05 04:14:29.878346 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:29.878357 | orchestrator |
2026-02-05 04:14:29.878367 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-02-05 04:14:29.878384 | orchestrator | Thursday 05 February 2026 04:14:18 +0000 (0:00:03.842) 0:05:02.011 *****
2026-02-05 04:14:29.878394 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:14:29.878441 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:14:29.878453 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:14:29.878464 | orchestrator |
2026-02-05 04:14:29.878475 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-05 04:14:29.878487 | orchestrator | Thursday 05 February 2026 04:14:20 +0000 (0:00:02.956) 0:05:04.967 *****
2026-02-05 04:14:29.878498 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:29.878510 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:29.878521 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:29.878532 | orchestrator |
2026-02-05 04:14:29.878544 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-05 04:14:29.878555 | orchestrator | Thursday 05 February 2026 04:14:23 +0000 (0:00:02.608) 0:05:07.576 *****
2026-02-05 04:14:29.878566 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:29.878577 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:29.878588 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:29.878600 | orchestrator |
2026-02-05 04:14:29.878611 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-05 04:14:29.878622 | orchestrator | Thursday 05 February 2026 04:14:24 +0000 (0:00:01.407) 0:05:08.983 *****
2026-02-05 04:14:29.878634 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 04:14:29.878645 | orchestrator |
2026-02-05 04:14:29.878656 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-05 04:14:29.878667 | orchestrator | Thursday 05 February 2026 04:14:27 +0000 (0:00:02.166) 0:05:11.150 *****
2026-02-05 04:14:29.878680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 04:14:29.878694 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 04:14:29.878707 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 04:14:29.878725 | orchestrator |
2026-02-05 04:14:29.878737 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-02-05 04:14:29.878750 | orchestrator | Thursday 05 February 2026 04:14:29 +0000 (0:00:02.496) 0:05:13.646 *****
2026-02-05 04:14:29.878773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 04:14:44.419042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 04:14:44.419174 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:44.419191 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:44.419201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-05 04:14:44.419211 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:14:44.419220 | orchestrator |
2026-02-05 04:14:44.419230 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-02-05 04:14:44.419241 | orchestrator | Thursday 05 February 2026 04:14:31 +0000 (0:00:01.676) 0:05:15.322 *****
2026-02-05 04:14:44.419251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-05 04:14:44.419262 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:14:44.419271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-05 04:14:44.419280 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:14:44.419289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-05 04:14:44.419319 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:14:44.419328 | orchestrator | 2026-02-05 04:14:44.419337 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-05 04:14:44.419346 | orchestrator | Thursday 05 February 2026 04:14:32 +0000 (0:00:01.413) 0:05:16.736 ***** 2026-02-05 04:14:44.419354 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:14:44.419363 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:14:44.419372 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:14:44.419380 | orchestrator | 2026-02-05 04:14:44.419389 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-05 04:14:44.419398 | orchestrator | Thursday 05 February 2026 04:14:34 +0000 (0:00:01.507) 0:05:18.243 ***** 2026-02-05 04:14:44.419407 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:14:44.419415 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:14:44.419424 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:14:44.419432 | orchestrator | 2026-02-05 04:14:44.419441 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-05 04:14:44.419450 | orchestrator | Thursday 05 February 2026 04:14:36 +0000 (0:00:02.212) 0:05:20.456 ***** 2026-02-05 04:14:44.419458 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:14:44.419467 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:14:44.419475 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:14:44.419484 | orchestrator | 2026-02-05 04:14:44.419493 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-05 04:14:44.419501 | orchestrator | Thursday 05 February 2026 04:14:38 +0000 (0:00:01.572) 0:05:22.029 ***** 2026-02-05 04:14:44.419510 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:14:44.419519 | 
orchestrator | 2026-02-05 04:14:44.419541 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-05 04:14:44.419551 | orchestrator | Thursday 05 February 2026 04:14:40 +0000 (0:00:02.013) 0:05:24.042 ***** 2026-02-05 04:14:44.419579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:14:44.419593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.419612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-05 04:14:44.419625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-05 04:14:44.419649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.546635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:44.546767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:44.546789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 04:14:44.546865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:44.546881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.546909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-05 04:14:44.546942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:44.546957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:14:44.546980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.546993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.547010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 04:14:44.547045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-05 04:14:44.654774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:44.654980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-05 04:14:44.655001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.655075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:44.655109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:14:44.655175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:44.655202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:44.655214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 04:14:44.655227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-05 04:14:44.655241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:44.655255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-05 04:14:45.908685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:45.908783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:45.908931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-05 04:14:45.908959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}}})  2026-02-05 04:14:45.908971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:45.908981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:45.909031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:45.909042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 04:14:45.909055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 04:14:45.909072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:45.909083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:45.909101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:45.909119 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-05 04:14:46.969571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:46.969666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:46.969696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 04:14:46.969707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:46.969735 | orchestrator | 2026-02-05 04:14:46.969746 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-05 04:14:46.969755 | orchestrator | Thursday 05 February 2026 04:14:45 +0000 (0:00:05.841) 0:05:29.883 ***** 2026-02-05 04:14:46.969780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:14:46.969790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:46.969799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-05 04:14:46.969813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-05 04:14:46.969892 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:46.969903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:46.969920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:47.056160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 04:14:47.056259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:47.056294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 
04:14:47.056331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-05 04:14:47.056347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:14:47.056379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:47.056391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:47.056409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:47.056429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-05 04:14:47.056441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 04:14:47.056462 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-05 04:14:47.158355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:47.158454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:47.158490 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:14:47.158500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:47.158509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:47.158518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 04:14:47.158526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:47.158549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:47.158564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:14:47.158578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-05 04:14:47.158587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:47.158594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:47.158608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-05 04:14:48.421934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:48.422127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-05 04:14:48.422160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 04:14:48.422179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:48.422192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:48.422224 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:14:48.422256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:48.422270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:14:48.422282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-05 04:14:48.422294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:14:48.422308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 04:14:48.422320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-05 04:14:48.422341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-05 04:15:02.770753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 04:15:02.770936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 04:15:02.770963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 04:15:02.770978 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:15:02.770993 | orchestrator | 2026-02-05 04:15:02.771006 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-05 04:15:02.771019 | orchestrator | Thursday 05 February 2026 04:14:48 +0000 (0:00:02.507) 0:05:32.391 ***** 2026-02-05 04:15:02.771033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:02.771047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:02.771060 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:15:02.771071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:02.771108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:02.771123 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:15:02.771135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:02.771169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:02.771185 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:15:02.771196 | orchestrator | 2026-02-05 04:15:02.771207 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-05 04:15:02.771218 | orchestrator | Thursday 05 February 2026 04:14:51 +0000 (0:00:02.900) 0:05:35.291 ***** 2026-02-05 04:15:02.771239 | orchestrator | 
ok: [testbed-node-1] 2026-02-05 04:15:02.771255 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:15:02.771267 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:15:02.771280 | orchestrator | 2026-02-05 04:15:02.771293 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-05 04:15:02.771306 | orchestrator | Thursday 05 February 2026 04:14:53 +0000 (0:00:02.256) 0:05:37.548 ***** 2026-02-05 04:15:02.771319 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:15:02.771331 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:15:02.771339 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:15:02.771348 | orchestrator | 2026-02-05 04:15:02.771370 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-05 04:15:02.771378 | orchestrator | Thursday 05 February 2026 04:14:56 +0000 (0:00:02.671) 0:05:40.219 ***** 2026-02-05 04:15:02.771393 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:15:02.771400 | orchestrator | 2026-02-05 04:15:02.771407 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-05 04:15:02.771415 | orchestrator | Thursday 05 February 2026 04:14:58 +0000 (0:00:02.109) 0:05:42.329 ***** 2026-02-05 04:15:02.771424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-05 04:15:02.771433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-05 04:15:02.771460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-05 04:15:18.781526 | orchestrator | 2026-02-05 04:15:18.781667 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-05 04:15:18.781694 | orchestrator | Thursday 05 February 2026 04:15:02 +0000 (0:00:04.411) 0:05:46.740 ***** 2026-02-05 04:15:18.781741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}}}})  2026-02-05 04:15:18.781767 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:15:18.781790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-05 04:15:18.781968 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:15:18.781992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-05 04:15:18.782011 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:15:18.782102 | orchestrator | 2026-02-05 04:15:18.782122 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-05 04:15:18.782143 | orchestrator | Thursday 05 February 2026 04:15:04 +0000 (0:00:01.660) 0:05:48.400 ***** 2026-02-05 04:15:18.782164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:15:18.782214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:15:18.782237 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:15:18.782266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:15:18.782285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}) 
 2026-02-05 04:15:18.782305 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:15:18.782324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:15:18.782343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:15:18.782362 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:15:18.782380 | orchestrator | 2026-02-05 04:15:18.782400 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-05 04:15:18.782420 | orchestrator | Thursday 05 February 2026 04:15:06 +0000 (0:00:01.788) 0:05:50.189 ***** 2026-02-05 04:15:18.782438 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:15:18.782457 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:15:18.782475 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:15:18.782493 | orchestrator | 2026-02-05 04:15:18.782513 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-05 04:15:18.782546 | orchestrator | Thursday 05 February 2026 04:15:08 +0000 (0:00:02.359) 0:05:52.549 ***** 2026-02-05 04:15:18.782564 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:15:18.782581 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:15:18.782600 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:15:18.782618 | orchestrator | 2026-02-05 04:15:18.782637 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-05 04:15:18.782656 | orchestrator | Thursday 05 February 2026 04:15:11 +0000 (0:00:02.768) 0:05:55.317 ***** 
2026-02-05 04:15:18.782668 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:15:18.782679 | orchestrator | 2026-02-05 04:15:18.782690 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-05 04:15:18.782701 | orchestrator | Thursday 05 February 2026 04:15:13 +0000 (0:00:02.247) 0:05:57.564 ***** 2026-02-05 04:15:18.782713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:15:18.782744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:15:19.909360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:15:19.909488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:15:19.909508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:15:19.909524 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:15:19.909558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:15:19.909572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:15:19.909592 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:15:19.909605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:15:19.909699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:15:19.909721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:15:19.909734 | orchestrator | 2026-02-05 04:15:19.909753 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-05 04:15:19.909778 | orchestrator | Thursday 05 February 2026 04:15:19 +0000 (0:00:06.322) 0:06:03.887 ***** 2026-02-05 04:15:20.642075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:15:20.642225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:15:20.642256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:15:20.642274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:15:20.642292 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:15:20.642356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:15:20.642384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:15:20.642402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:15:20.642420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:15:20.642437 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:15:20.642455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:15:20.642494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:15:40.870200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 04:15:40.870302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 04:15:40.870314 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:15:40.870322 | orchestrator | 2026-02-05 04:15:40.870329 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-05 04:15:40.870337 | orchestrator | Thursday 05 February 2026 04:15:21 +0000 (0:00:01.859) 0:06:05.747 ***** 2026-02-05 04:15:40.870344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870375 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:15:40.870381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870442 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:15:40.870448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 
04:15:40.870484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:15:40.870491 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:15:40.870497 | orchestrator | 2026-02-05 04:15:40.870503 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-05 04:15:40.870509 | orchestrator | Thursday 05 February 2026 04:15:24 +0000 (0:00:02.479) 0:06:08.227 ***** 2026-02-05 04:15:40.870516 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:15:40.870523 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:15:40.870530 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:15:40.870535 | orchestrator | 2026-02-05 04:15:40.870541 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-05 04:15:40.870546 | orchestrator | Thursday 05 February 2026 04:15:26 +0000 (0:00:02.253) 0:06:10.480 ***** 2026-02-05 04:15:40.870551 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:15:40.870557 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:15:40.870563 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:15:40.870569 | orchestrator | 2026-02-05 04:15:40.870574 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-05 04:15:40.870581 | orchestrator | Thursday 05 February 2026 04:15:29 +0000 (0:00:02.940) 0:06:13.421 ***** 2026-02-05 04:15:40.870587 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:15:40.870593 | orchestrator | 2026-02-05 04:15:40.870599 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-05 04:15:40.870606 | orchestrator | Thursday 05 February 2026 04:15:32 +0000 
(0:00:02.621) 0:06:16.043 ***** 2026-02-05 04:15:40.870613 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-05 04:15:40.870621 | orchestrator | 2026-02-05 04:15:40.870627 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-05 04:15:40.870646 | orchestrator | Thursday 05 February 2026 04:15:33 +0000 (0:00:01.627) 0:06:17.670 ***** 2026-02-05 04:15:40.870654 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 04:15:40.870669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 04:15:40.870680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 04:15:40.870687 | orchestrator | 2026-02-05 04:15:40.870694 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-05 04:15:40.870702 | orchestrator | Thursday 05 February 2026 04:15:38 +0000 (0:00:04.982) 0:06:22.653 ***** 2026-02-05 04:15:40.870708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:15:40.870720 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:02.916243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916358 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:02.916377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916389 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:02.916401 | orchestrator | 2026-02-05 04:16:02.916414 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-05 04:16:02.916427 | orchestrator | Thursday 05 February 2026 04:15:40 +0000 (0:00:02.188) 0:06:24.842 ***** 2026-02-05 04:16:02.916440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 04:16:02.916455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 04:16:02.916495 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:02.916503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 04:16:02.916509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 04:16:02.916515 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:02.916522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 04:16:02.916529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 04:16:02.916535 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:02.916541 | orchestrator | 2026-02-05 04:16:02.916548 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 04:16:02.916558 | orchestrator | Thursday 05 February 2026 04:15:43 +0000 (0:00:02.329) 0:06:27.172 ***** 2026-02-05 04:16:02.916569 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:02.916580 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:02.916605 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:02.916617 | orchestrator | 2026-02-05 04:16:02.916627 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 04:16:02.916637 | orchestrator | Thursday 05 February 2026 04:15:46 +0000 (0:00:03.717) 0:06:30.889 ***** 2026-02-05 04:16:02.916648 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:02.916733 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:02.916741 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:02.916747 | orchestrator | 2026-02-05 04:16:02.916753 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-05 04:16:02.916780 | orchestrator | Thursday 05 February 2026 04:15:50 +0000 (0:00:03.490) 0:06:34.380 ***** 2026-02-05 04:16:02.916789 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-05 04:16:02.916797 | orchestrator | 2026-02-05 04:16:02.916804 | 
orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-05 04:16:02.916812 | orchestrator | Thursday 05 February 2026 04:15:52 +0000 (0:00:01.810) 0:06:36.190 ***** 2026-02-05 04:16:02.916837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916846 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:02.916854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916870 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:02.916878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916885 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:02.916893 | orchestrator | 2026-02-05 04:16:02.916903 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-05 04:16:02.916914 | orchestrator | Thursday 05 February 2026 04:15:54 +0000 (0:00:02.498) 0:06:38.688 ***** 2026-02-05 04:16:02.916925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916936 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:02.916948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916959 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:02.916977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 04:16:02.916989 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:02.916999 | orchestrator | 2026-02-05 04:16:02.917010 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-05 04:16:02.917018 | orchestrator | Thursday 05 February 2026 04:15:57 +0000 (0:00:02.354) 0:06:41.042 ***** 2026-02-05 04:16:02.917025 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:02.917033 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:02.917040 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:02.917047 | orchestrator | 2026-02-05 04:16:02.917055 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 04:16:02.917062 | orchestrator | Thursday 05 February 2026 04:15:59 +0000 (0:00:02.279) 0:06:43.322 ***** 2026-02-05 04:16:02.917070 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:02.917077 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:02.917084 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:02.917092 | orchestrator | 2026-02-05 04:16:02.917099 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 04:16:02.917106 | orchestrator | Thursday 05 February 2026 04:16:02 +0000 (0:00:03.562) 0:06:46.884 ***** 2026-02-05 04:16:30.022412 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:30.022528 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:30.022540 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:30.022549 | orchestrator | 2026-02-05 04:16:30.022559 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 
2026-02-05 04:16:30.022568 | orchestrator | Thursday 05 February 2026 04:16:06 +0000 (0:00:03.819) 0:06:50.703 ***** 2026-02-05 04:16:30.022576 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-05 04:16:30.022586 | orchestrator | 2026-02-05 04:16:30.022595 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-05 04:16:30.022603 | orchestrator | Thursday 05 February 2026 04:16:08 +0000 (0:00:02.242) 0:06:52.946 ***** 2026-02-05 04:16:30.022616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 04:16:30.022632 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:30.022642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 04:16:30.022650 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:30.022658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 04:16:30.022666 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:30.022674 | orchestrator | 2026-02-05 04:16:30.022682 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-05 04:16:30.022691 | orchestrator | Thursday 05 February 2026 04:16:11 +0000 (0:00:02.352) 0:06:55.299 ***** 2026-02-05 04:16:30.022700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 04:16:30.022708 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:30.022729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 04:16:30.022744 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:30.022814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 04:16:30.022824 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:30.022832 | orchestrator | 2026-02-05 04:16:30.022840 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-05 04:16:30.022848 | orchestrator | Thursday 05 February 2026 04:16:13 +0000 (0:00:02.533) 0:06:57.833 ***** 2026-02-05 04:16:30.022856 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:30.022864 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:30.022872 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:30.022880 | orchestrator | 2026-02-05 04:16:30.022888 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 04:16:30.022896 | orchestrator | Thursday 05 February 2026 04:16:16 +0000 (0:00:02.770) 0:07:00.604 ***** 2026-02-05 04:16:30.022904 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:30.022912 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:30.022919 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:30.022927 | orchestrator | 2026-02-05 04:16:30.022935 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 04:16:30.022943 | orchestrator | Thursday 05 February 2026 04:16:19 +0000 
(0:00:03.282) 0:07:03.886 ***** 2026-02-05 04:16:30.022953 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:30.022962 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:30.022972 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:30.022981 | orchestrator | 2026-02-05 04:16:30.022990 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-05 04:16:30.023000 | orchestrator | Thursday 05 February 2026 04:16:23 +0000 (0:00:04.001) 0:07:07.888 ***** 2026-02-05 04:16:30.023009 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:16:30.023019 | orchestrator | 2026-02-05 04:16:30.023028 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-05 04:16:30.023037 | orchestrator | Thursday 05 February 2026 04:16:26 +0000 (0:00:02.343) 0:07:10.231 ***** 2026-02-05 04:16:30.023048 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 04:16:30.023067 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 04:16:30.023083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 04:16:30.023101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 04:16:31.886891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 04:16:31.887005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 04:16:31.887024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 04:16:31.887038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:16:31.887094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 04:16:31.887109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:16:31.887143 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 04:16:31.887157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 04:16:31.887169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 04:16:31.887181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 04:16:31.887206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:16:31.887219 | orchestrator | 2026-02-05 04:16:31.887233 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 
2026-02-05 04:16:31.887245 | orchestrator | Thursday 05 February 2026 04:16:31 +0000 (0:00:05.009) 0:07:15.240 ***** 2026-02-05 04:16:31.887268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 04:16:33.059396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 04:16:33.059478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 04:16:33.059491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 04:16:33.059519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:16:33.059528 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:33.059551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 04:16:33.059561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 04:16:33.059585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 04:16:33.059594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 04:16:33.059603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:16:33.059616 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:33.059628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 04:16:33.059637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 04:16:33.059650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 04:16:49.383966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 04:16:49.384093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 04:16:49.384148 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:49.384166 | orchestrator | 2026-02-05 04:16:49.384181 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-05 04:16:49.384195 | orchestrator | Thursday 05 February 2026 04:16:33 +0000 (0:00:01.798) 0:07:17.039 ***** 2026-02-05 04:16:49.384210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 04:16:49.384226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 04:16:49.384241 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:49.384255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 04:16:49.384268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 04:16:49.384281 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:49.384310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 04:16:49.384326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 04:16:49.384340 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:49.384353 | orchestrator | 2026-02-05 04:16:49.384366 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-05 04:16:49.384378 | orchestrator | Thursday 05 February 2026 04:16:35 +0000 (0:00:02.107) 0:07:19.147 ***** 2026-02-05 04:16:49.384391 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:49.384405 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:49.384418 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:49.384432 | orchestrator | 2026-02-05 04:16:49.384445 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] 
************ 2026-02-05 04:16:49.384459 | orchestrator | Thursday 05 February 2026 04:16:37 +0000 (0:00:02.289) 0:07:21.437 ***** 2026-02-05 04:16:49.384471 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:16:49.384484 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:16:49.384498 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:16:49.384511 | orchestrator | 2026-02-05 04:16:49.384524 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-05 04:16:49.384537 | orchestrator | Thursday 05 February 2026 04:16:40 +0000 (0:00:02.853) 0:07:24.291 ***** 2026-02-05 04:16:49.384551 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:16:49.384565 | orchestrator | 2026-02-05 04:16:49.384579 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-05 04:16:49.384592 | orchestrator | Thursday 05 February 2026 04:16:42 +0000 (0:00:02.405) 0:07:26.696 ***** 2026-02-05 04:16:49.384631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:16:49.384666 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:16:49.384681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:16:49.384705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:16:49.384734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:16:53.182742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:16:53.182881 | orchestrator | 2026-02-05 04:16:53.182895 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-05 04:16:53.182904 | orchestrator | Thursday 05 February 2026 04:16:49 +0000 (0:00:06.658) 0:07:33.355 ***** 2026-02-05 04:16:53.182928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:16:53.182938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
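The `haproxy-config : Copying over opensearch haproxy config` records above show the service dictionaries kolla-ansible feeds into its HAProxy templating: each service carries a `haproxy` map with `mode`, `port`, `frontend_http_extra`, and `backend_http_extra`, and one backend per node (192.168.16.10-12). The snippet below is a simplified sketch of how such a dict could be rendered into a HAProxy `listen` block; the `render_haproxy_stanza` helper is hypothetical and only illustrates the data flow, it is not the role's actual Jinja2 template.

```python
# Hypothetical sketch: turn one service's 'haproxy' dict (copied from the log
# above) into a minimal HAProxy listen block. The real haproxy-config role
# renders Jinja2 templates into /etc/kolla/haproxy/services.d/ instead.

def render_haproxy_stanza(name, cfg, backends):
    """Render a minimal HAProxy listen block for one service entry."""
    lines = [f"listen {name}", f"    mode {cfg.get('mode', 'http')}"]
    # Extra directives are appended verbatim, mirroring the
    # 'frontend_http_extra' / 'backend_http_extra' lists in the log.
    for extra in cfg.get("frontend_http_extra", []):
        lines.append(f"    {extra}")
    for extra in cfg.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{cfg['port']} check")
    return "\n".join(lines)

# Values copied from the opensearch item in the task output above.
opensearch_haproxy = {
    "enabled": True,
    "mode": "http",
    "external": False,
    "port": "9200",
    "frontend_http_extra": ["option dontlog-normal"],
    "backend_http_extra": ["option httpchk"],
}
backends = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]

stanza = render_haproxy_stanza("opensearch", opensearch_haproxy, backends)
print(stanza)
```

Because `external` is `False` for the opensearch entry, the real role binds this only on the internal VIP; the dashboards service additionally defines an `opensearch_dashboards_external` entry with `external_fqdn: api.testbed.osism.xyz` for the public frontend.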
2026-02-05 04:16:53.182965 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:16:53.182990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:16:53.182998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:16:53.183006 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:16:53.183018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:16:53.183027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:16:53.183041 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:16:53.183048 | orchestrator | 2026-02-05 04:16:53.183056 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-05 04:16:53.183064 | orchestrator | Thursday 05 February 2026 04:16:51 +0000 (0:00:02.089) 0:07:35.445 ***** 2026-02-05 04:16:53.183073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:16:53.183087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-05 04:17:01.968538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-05 04:17:01.968662 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:01.968681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:01.968695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-05 04:17:01.968729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-05 04:17:01.968752 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:01.968884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:01.968901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-05 04:17:01.968921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-05 04:17:01.968940 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:01.968958 | orchestrator | 2026-02-05 
04:17:01.968987 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-05 04:17:01.969009 | orchestrator | Thursday 05 February 2026 04:16:53 +0000 (0:00:01.717) 0:07:37.162 ***** 2026-02-05 04:17:01.969064 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:01.969083 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:01.969096 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:01.969110 | orchestrator | 2026-02-05 04:17:01.969124 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-05 04:17:01.969137 | orchestrator | Thursday 05 February 2026 04:16:54 +0000 (0:00:01.476) 0:07:38.639 ***** 2026-02-05 04:17:01.969150 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:01.969163 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:01.969176 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:01.969189 | orchestrator | 2026-02-05 04:17:01.969202 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-05 04:17:01.969216 | orchestrator | Thursday 05 February 2026 04:16:56 +0000 (0:00:02.335) 0:07:40.975 ***** 2026-02-05 04:17:01.969229 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:17:01.969243 | orchestrator | 2026-02-05 04:17:01.969256 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-05 04:17:01.969269 | orchestrator | Thursday 05 February 2026 04:16:59 +0000 (0:00:02.439) 0:07:43.414 ***** 2026-02-05 04:17:01.969310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-05 04:17:01.969435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-05 
04:17:01.969485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 04:17:01.969520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 04:17:01.969541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:01.969562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:01.969581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:01.969712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:03.905206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 04:17:03.905331 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 04:17:03.905416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-05 04:17:03.905445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 04:17:03.905466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:03.905486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:03.905537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 04:17:03.905570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:17:03.905607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-05 04:17:03.905629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:03.905648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:03.905668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 04:17:03.905705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:17:05.902364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-05 04:17:05.902481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:05.902504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:05.902519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 04:17:05.902535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:17:05.902580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-05 04:17:05.902623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:05.902638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:05.902651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 04:17:05.902666 | orchestrator | 2026-02-05 04:17:05.902682 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-05 04:17:05.902697 | orchestrator | Thursday 05 February 2026 04:17:05 +0000 (0:00:05.704) 0:07:49.119 ***** 2026-02-05 04:17:05.902711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-05 04:17:05.902726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 04:17:05.902792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 
04:17:06.019945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:06.020046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 04:17:06.020064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:17:06.020078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-05 04:17:06.020091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:06.020147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:06.020161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 04:17:06.020175 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:06.020188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-05 04:17:06.020202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 04:17:06.020214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:06.020226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:06.020244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 04:17:06.020269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:17:07.128779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-05 04:17:07.128868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-05 04:17:07.128900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:07.128908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 04:17:07.128925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:07.128968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:07.128975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 04:17:07.128980 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:07.128987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:07.128993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 04:17:07.128999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:17:07.129012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-05 04:17:07.129022 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:18.708930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:17:18.709039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 04:17:18.709057 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:18.709071 | orchestrator | 2026-02-05 04:17:18.709084 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-05 04:17:18.709097 | orchestrator | Thursday 05 February 2026 04:17:07 +0000 (0:00:01.989) 0:07:51.108 ***** 2026-02-05 
04:17:18.709110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-05 04:17:18.709151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-05 04:17:18.709166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:18.709178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:18.709191 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:18.709217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-05 04:17:18.709229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-05 04:17:18.709241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:18.709272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:18.709285 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:18.709297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-05 04:17:18.709309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-05 04:17:18.709321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:18.709340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-05 04:17:18.709352 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:18.709364 | orchestrator | 2026-02-05 04:17:18.709376 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-05 04:17:18.709387 | orchestrator | Thursday 05 February 2026 04:17:08 +0000 (0:00:01.810) 0:07:52.919 ***** 2026-02-05 04:17:18.709399 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:18.709411 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:18.709422 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:18.709434 | orchestrator | 2026-02-05 04:17:18.709445 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-05 04:17:18.709457 | orchestrator | Thursday 05 February 2026 04:17:10 +0000 (0:00:01.757) 0:07:54.677 ***** 2026-02-05 04:17:18.709468 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:18.709480 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:18.709491 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 04:17:18.709502 | orchestrator | 2026-02-05 04:17:18.709514 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-05 04:17:18.709525 | orchestrator | Thursday 05 February 2026 04:17:12 +0000 (0:00:01.989) 0:07:56.666 ***** 2026-02-05 04:17:18.709537 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:17:18.709548 | orchestrator | 2026-02-05 04:17:18.709560 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-05 04:17:18.709571 | orchestrator | Thursday 05 February 2026 04:17:14 +0000 (0:00:02.144) 0:07:58.810 ***** 2026-02-05 04:17:18.709590 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:17:18.709615 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:17:36.107913 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:17:36.108021 | orchestrator | 2026-02-05 04:17:36.108037 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-05 04:17:36.108048 | 
orchestrator | Thursday 05 February 2026 04:17:18 +0000 (0:00:03.871) 0:08:02.682 ***** 2026-02-05 04:17:36.108059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:17:36.108069 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:36.108094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:17:36.108104 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:36.108130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:17:36.108161 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:36.108171 | orchestrator | 2026-02-05 04:17:36.108181 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-05 04:17:36.108191 | orchestrator | Thursday 05 February 2026 04:17:20 +0000 (0:00:01.508) 0:08:04.190 ***** 2026-02-05 04:17:36.108201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 04:17:36.108210 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:36.108219 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 04:17:36.108228 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:36.108237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 04:17:36.108246 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:36.108254 | orchestrator | 2026-02-05 04:17:36.108263 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-05 04:17:36.108272 | orchestrator | Thursday 05 February 2026 04:17:21 +0000 (0:00:01.492) 0:08:05.684 ***** 2026-02-05 04:17:36.108280 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:36.108289 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:36.108297 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:36.108306 | orchestrator | 2026-02-05 04:17:36.108315 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-05 04:17:36.108323 | orchestrator | Thursday 05 February 2026 04:17:23 +0000 (0:00:01.860) 0:08:07.544 ***** 2026-02-05 04:17:36.108332 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:36.108341 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:36.108349 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:36.108358 | orchestrator | 2026-02-05 04:17:36.108367 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-05 04:17:36.108375 | orchestrator | Thursday 05 February 2026 04:17:25 +0000 (0:00:02.270) 0:08:09.814 ***** 2026-02-05 04:17:36.108384 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:17:36.108393 | orchestrator | 2026-02-05 04:17:36.108403 | orchestrator | TASK 
[haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-05 04:17:36.108414 | orchestrator | Thursday 05 February 2026 04:17:28 +0000 (0:00:02.248) 0:08:12.062 ***** 2026-02-05 04:17:36.108430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-05 04:17:36.108449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-05 04:17:36.108469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-05 04:17:37.763640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-05 04:17:37.763811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-05 04:17:37.764586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-05 04:17:37.764625 | orchestrator | 2026-02-05 04:17:37.764640 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-05 04:17:37.764653 | orchestrator | Thursday 05 February 2026 04:17:36 +0000 (0:00:08.013) 0:08:20.076 ***** 2026-02-05 04:17:37.764689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-05 04:17:37.764704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-05 04:17:37.764716 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:37.764741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-05 04:17:37.764807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-05 04:17:37.764821 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:37.764843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-05 04:17:58.764432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-05 04:17:58.764578 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.764597 | orchestrator | 2026-02-05 04:17:58.764611 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-05 04:17:58.764639 | orchestrator | Thursday 05 February 2026 04:17:37 +0000 (0:00:01.662) 
0:08:21.739 ***** 2026-02-05 04:17:58.764652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-05 04:17:58.764667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-05 04:17:58.764680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:17:58.764693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:17:58.764704 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:58.764715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-05 04:17:58.764727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-05 04:17:58.764738 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:17:58.764844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:17:58.764863 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:58.764881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-05 04:17:58.764901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-05 04:17:58.764943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:17:58.764963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-05 04:17:58.764987 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.764998 | orchestrator | 
2026-02-05 04:17:58.765009 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-05 04:17:58.765020 | orchestrator | Thursday 05 February 2026 04:17:39 +0000 (0:00:01.744) 0:08:23.484 ***** 2026-02-05 04:17:58.765031 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:17:58.765043 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:17:58.765053 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:17:58.765064 | orchestrator | 2026-02-05 04:17:58.765075 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-05 04:17:58.765086 | orchestrator | Thursday 05 February 2026 04:17:42 +0000 (0:00:02.557) 0:08:26.041 ***** 2026-02-05 04:17:58.765097 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:17:58.765108 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:17:58.765118 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:17:58.765129 | orchestrator | 2026-02-05 04:17:58.765140 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-05 04:17:58.765157 | orchestrator | Thursday 05 February 2026 04:17:44 +0000 (0:00:02.896) 0:08:28.938 ***** 2026-02-05 04:17:58.765169 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:58.765180 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:58.765191 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.765202 | orchestrator | 2026-02-05 04:17:58.765213 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-05 04:17:58.765224 | orchestrator | Thursday 05 February 2026 04:17:46 +0000 (0:00:01.377) 0:08:30.315 ***** 2026-02-05 04:17:58.765236 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:58.765247 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:58.765258 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.765268 | orchestrator | 2026-02-05 04:17:58.765279 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-05 04:17:58.765290 | orchestrator | Thursday 05 February 2026 04:17:47 +0000 (0:00:01.309) 0:08:31.625 ***** 2026-02-05 04:17:58.765301 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:58.765312 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:58.765323 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.765334 | orchestrator | 2026-02-05 04:17:58.765345 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-05 04:17:58.765356 | orchestrator | Thursday 05 February 2026 04:17:49 +0000 (0:00:01.708) 0:08:33.334 ***** 2026-02-05 04:17:58.765367 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:58.765378 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:58.765388 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.765399 | orchestrator | 2026-02-05 04:17:58.765410 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-05 04:17:58.765421 | orchestrator | Thursday 05 February 2026 04:17:50 +0000 (0:00:01.362) 0:08:34.696 ***** 2026-02-05 04:17:58.765432 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:17:58.765442 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:17:58.765453 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:17:58.765464 | orchestrator | 2026-02-05 04:17:58.765475 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-05 04:17:58.765486 | orchestrator | Thursday 05 February 2026 04:17:52 +0000 (0:00:01.350) 0:08:36.047 ***** 2026-02-05 04:17:58.765497 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:17:58.765508 | orchestrator | 2026-02-05 04:17:58.765519 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 
2026-02-05 04:17:58.765530 | orchestrator | Thursday 05 February 2026 04:17:54 +0000 (0:00:02.649) 0:08:38.697 ***** 2026-02-05 04:17:58.765543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 04:17:58.765572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 04:18:03.086792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 04:18:03.086874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:18:03.086883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:18:03.086888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 04:18:03.086893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:18:03.086914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 04:18:03.086932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-02-05 04:18:03.086938 | orchestrator | 2026-02-05 04:18:03.086944 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-05 04:18:03.086949 | orchestrator | Thursday 05 February 2026 04:17:58 +0000 (0:00:04.041) 0:08:42.739 ***** 2026-02-05 04:18:03.086955 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:18:03.086960 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:18:03.086965 | orchestrator | } 2026-02-05 04:18:03.086969 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:18:03.086974 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:18:03.086978 | orchestrator | } 2026-02-05 04:18:03.086982 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:18:03.086987 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:18:03.086991 | orchestrator | } 2026-02-05 04:18:03.086995 | orchestrator | 2026-02-05 04:18:03.087000 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:18:03.087004 | orchestrator | Thursday 05 February 2026 04:18:00 +0000 (0:00:01.412) 0:08:44.152 ***** 2026-02-05 04:18:03.087012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 04:18:03.087017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:18:03.087022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:18:03.087030 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:18:03.087035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 04:18:03.087040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:18:03.087049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:20:01.573706 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.573909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 04:20:01.574135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 04:20:01.574171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 04:20:01.574207 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.574223 | orchestrator | 2026-02-05 04:20:01.574238 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-05 04:20:01.574253 | orchestrator | Thursday 05 February 2026 04:18:03 +0000 (0:00:02.899) 0:08:47.052 ***** 2026-02-05 04:20:01.574265 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:01.574277 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:01.574288 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:01.574300 | orchestrator | 2026-02-05 04:20:01.574313 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-05 04:20:01.574325 | orchestrator | Thursday 05 February 2026 04:18:04 +0000 (0:00:01.775) 
0:08:48.827 ***** 2026-02-05 04:20:01.574340 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:01.574353 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:01.574365 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:01.574377 | orchestrator | 2026-02-05 04:20:01.574389 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-05 04:20:01.574402 | orchestrator | Thursday 05 February 2026 04:18:06 +0000 (0:00:01.419) 0:08:50.247 ***** 2026-02-05 04:20:01.574414 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:01.574426 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:20:01.574437 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:20:01.574447 | orchestrator | 2026-02-05 04:20:01.574463 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-05 04:20:01.574476 | orchestrator | Thursday 05 February 2026 04:18:13 +0000 (0:00:07.047) 0:08:57.294 ***** 2026-02-05 04:20:01.574487 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:01.574498 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:20:01.574509 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:20:01.574520 | orchestrator | 2026-02-05 04:20:01.574531 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-05 04:20:01.574542 | orchestrator | Thursday 05 February 2026 04:18:20 +0000 (0:00:07.462) 0:09:04.757 ***** 2026-02-05 04:20:01.574552 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:01.574562 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:20:01.574573 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:20:01.574584 | orchestrator | 2026-02-05 04:20:01.574595 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-05 04:20:01.574606 | orchestrator | Thursday 05 February 2026 04:18:27 +0000 (0:00:07.085) 0:09:11.843 ***** 
2026-02-05 04:20:01.574618 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:01.574629 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:20:01.574640 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:20:01.574650 | orchestrator | 2026-02-05 04:20:01.574663 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-05 04:20:01.574675 | orchestrator | Thursday 05 February 2026 04:18:35 +0000 (0:00:07.516) 0:09:19.360 ***** 2026-02-05 04:20:01.574686 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:01.574697 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:01.574708 | orchestrator | 2026-02-05 04:20:01.574719 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-05 04:20:01.574730 | orchestrator | Thursday 05 February 2026 04:18:38 +0000 (0:00:02.764) 0:09:22.124 ***** 2026-02-05 04:20:01.574741 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:01.574780 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:20:01.574791 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:20:01.574801 | orchestrator | 2026-02-05 04:20:01.574836 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-05 04:20:01.574848 | orchestrator | Thursday 05 February 2026 04:18:50 +0000 (0:00:12.408) 0:09:34.533 ***** 2026-02-05 04:20:01.574872 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:01.574883 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:01.574894 | orchestrator | 2026-02-05 04:20:01.574905 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-05 04:20:01.574916 | orchestrator | Thursday 05 February 2026 04:18:54 +0000 (0:00:03.764) 0:09:38.298 ***** 2026-02-05 04:20:01.574926 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:01.574937 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:20:01.574948 | 
orchestrator | changed: [testbed-node-2] 2026-02-05 04:20:01.574959 | orchestrator | 2026-02-05 04:20:01.574969 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-05 04:20:01.574980 | orchestrator | Thursday 05 February 2026 04:19:01 +0000 (0:00:07.189) 0:09:45.487 ***** 2026-02-05 04:20:01.574990 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.575001 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.575020 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:20:01.575030 | orchestrator | 2026-02-05 04:20:01.575041 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-05 04:20:01.575051 | orchestrator | Thursday 05 February 2026 04:19:08 +0000 (0:00:06.828) 0:09:52.316 ***** 2026-02-05 04:20:01.575061 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.575072 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.575082 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:20:01.575093 | orchestrator | 2026-02-05 04:20:01.575103 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-05 04:20:01.575114 | orchestrator | Thursday 05 February 2026 04:19:15 +0000 (0:00:06.997) 0:09:59.313 ***** 2026-02-05 04:20:01.575125 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.575136 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.575145 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:20:01.575155 | orchestrator | 2026-02-05 04:20:01.575165 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-05 04:20:01.575176 | orchestrator | Thursday 05 February 2026 04:19:22 +0000 (0:00:06.980) 0:10:06.294 ***** 2026-02-05 04:20:01.575186 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.575197 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.575207 | 
orchestrator | changed: [testbed-node-0] 2026-02-05 04:20:01.575217 | orchestrator | 2026-02-05 04:20:01.575228 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-05 04:20:01.575238 | orchestrator | Thursday 05 February 2026 04:19:29 +0000 (0:00:07.678) 0:10:13.973 ***** 2026-02-05 04:20:01.575249 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:01.575260 | orchestrator | 2026-02-05 04:20:01.575270 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-05 04:20:01.575280 | orchestrator | Thursday 05 February 2026 04:19:33 +0000 (0:00:03.633) 0:10:17.607 ***** 2026-02-05 04:20:01.575290 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.575301 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.575311 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:20:01.575322 | orchestrator | 2026-02-05 04:20:01.575332 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-05 04:20:01.575344 | orchestrator | Thursday 05 February 2026 04:19:46 +0000 (0:00:12.541) 0:10:30.148 ***** 2026-02-05 04:20:01.575354 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:01.575365 | orchestrator | 2026-02-05 04:20:01.575376 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-05 04:20:01.575387 | orchestrator | Thursday 05 February 2026 04:19:50 +0000 (0:00:04.055) 0:10:34.204 ***** 2026-02-05 04:20:01.575397 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:01.575407 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:01.575418 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:20:01.575428 | orchestrator | 2026-02-05 04:20:01.575439 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-05 04:20:01.575461 | orchestrator | Thursday 05 February 2026 04:19:57 +0000 
(0:00:07.080) 0:10:41.284 ***** 2026-02-05 04:20:01.575472 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:01.575483 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:01.575494 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:01.575504 | orchestrator | 2026-02-05 04:20:01.575514 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-05 04:20:01.575525 | orchestrator | Thursday 05 February 2026 04:19:59 +0000 (0:00:01.974) 0:10:43.259 ***** 2026-02-05 04:20:01.575536 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:01.575546 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:01.575556 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:01.575566 | orchestrator | 2026-02-05 04:20:01.575577 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:20:01.575590 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-05 04:20:01.575603 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-05 04:20:01.575613 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-05 04:20:01.575625 | orchestrator | 2026-02-05 04:20:01.575636 | orchestrator | 2026-02-05 04:20:01.575646 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:20:01.575658 | orchestrator | Thursday 05 February 2026 04:20:01 +0000 (0:00:02.277) 0:10:45.536 ***** 2026-02-05 04:20:01.575670 | orchestrator | =============================================================================== 2026-02-05 04:20:01.575681 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.54s 2026-02-05 04:20:01.575692 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.41s 2026-02-05 
04:20:01.575703 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.01s 2026-02-05 04:20:01.575729 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.68s 2026-02-05 04:20:02.360102 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.52s 2026-02-05 04:20:02.360175 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.46s 2026-02-05 04:20:02.360182 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.19s 2026-02-05 04:20:02.360188 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.09s 2026-02-05 04:20:02.360193 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.08s 2026-02-05 04:20:02.360198 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.05s 2026-02-05 04:20:02.360203 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 7.00s 2026-02-05 04:20:02.360208 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.98s 2026-02-05 04:20:02.360228 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.83s 2026-02-05 04:20:02.360234 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.66s 2026-02-05 04:20:02.360239 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.32s 2026-02-05 04:20:02.360244 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.84s 2026-02-05 04:20:02.360248 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.73s 2026-02-05 04:20:02.360253 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.70s 2026-02-05 04:20:02.360258 
| orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.30s 2026-02-05 04:20:02.360263 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.11s 2026-02-05 04:20:02.646227 | orchestrator | + osism apply -a upgrade opensearch 2026-02-05 04:20:04.681367 | orchestrator | 2026-02-05 04:20:04 | INFO  | Task 6fdb69d6-67de-49da-9167-41e0c6a8e533 (opensearch) was prepared for execution. 2026-02-05 04:20:04.681468 | orchestrator | 2026-02-05 04:20:04 | INFO  | It takes a moment until task 6fdb69d6-67de-49da-9167-41e0c6a8e533 (opensearch) has been started and output is visible here. 2026-02-05 04:20:22.620141 | orchestrator | 2026-02-05 04:20:22.620304 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 04:20:22.620328 | orchestrator | 2026-02-05 04:20:22.620349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 04:20:22.620369 | orchestrator | Thursday 05 February 2026 04:20:10 +0000 (0:00:01.569) 0:00:01.569 ***** 2026-02-05 04:20:22.620399 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:22.620420 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:22.620439 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:22.620459 | orchestrator | 2026-02-05 04:20:22.620478 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 04:20:22.620499 | orchestrator | Thursday 05 February 2026 04:20:11 +0000 (0:00:01.603) 0:00:03.172 ***** 2026-02-05 04:20:22.620518 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-05 04:20:22.620536 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-05 04:20:22.620552 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-05 04:20:22.620568 | orchestrator | 2026-02-05 04:20:22.620585 | orchestrator | PLAY [Apply role 
opensearch] *************************************************** 2026-02-05 04:20:22.620602 | orchestrator | 2026-02-05 04:20:22.620619 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 04:20:22.620637 | orchestrator | Thursday 05 February 2026 04:20:13 +0000 (0:00:01.689) 0:00:04.862 ***** 2026-02-05 04:20:22.620655 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:20:22.620671 | orchestrator | 2026-02-05 04:20:22.620692 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-05 04:20:22.620712 | orchestrator | Thursday 05 February 2026 04:20:16 +0000 (0:00:02.737) 0:00:07.600 ***** 2026-02-05 04:20:22.620730 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 04:20:22.620745 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 04:20:22.620790 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 04:20:22.620807 | orchestrator | 2026-02-05 04:20:22.620824 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-05 04:20:22.620842 | orchestrator | Thursday 05 February 2026 04:20:18 +0000 (0:00:02.300) 0:00:09.900 ***** 2026-02-05 04:20:22.620866 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:22.620915 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:22.620983 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:22.621000 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:22.621014 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:22.621034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:22.621055 | orchestrator | 2026-02-05 
04:20:22.621066 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 04:20:22.621086 | orchestrator | Thursday 05 February 2026 04:20:20 +0000 (0:00:02.291) 0:00:12.192 ***** 2026-02-05 04:20:22.621104 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:20:22.621122 | orchestrator | 2026-02-05 04:20:22.621150 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-05 04:20:28.290430 | orchestrator | Thursday 05 February 2026 04:20:22 +0000 (0:00:01.612) 0:00:13.805 ***** 2026-02-05 04:20:28.290540 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:28.290555 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:28.290563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:28.290608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:28.290644 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:28.290655 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:28.290663 | orchestrator | 2026-02-05 04:20:28.290678 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-05 04:20:28.290686 | orchestrator | Thursday 05 February 2026 04:20:26 +0000 (0:00:03.670) 0:00:17.476 ***** 2026-02-05 04:20:28.290698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:20:28.290714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:20:30.238837 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:30.238917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:20:30.238929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:20:30.238954 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:30.238972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:20:30.238991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:20:30.238997 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:30.239002 | orchestrator | 2026-02-05 04:20:30.239008 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-05 04:20:30.239015 | orchestrator | Thursday 05 February 2026 04:20:28 +0000 (0:00:02.004) 0:00:19.480 ***** 2026-02-05 04:20:30.239021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:20:30.239026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:20:30.239036 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:20:30.239045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:20:30.239056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:20:34.105475 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:20:34.105559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-02-05 04:20:34.105594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:20:34.105600 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:20:34.105604 | orchestrator | 2026-02-05 04:20:34.105610 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-05 04:20:34.105616 | orchestrator | Thursday 05 February 2026 04:20:30 +0000 (0:00:01.944) 0:00:21.424 ***** 2026-02-05 04:20:34.105620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:34.105634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:34.105639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:34.105650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:34.105657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:34.105669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:47.796375 | orchestrator | 2026-02-05 04:20:47.796503 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-05 04:20:47.796530 | orchestrator | Thursday 05 February 2026 04:20:34 +0000 (0:00:03.868) 0:00:25.293 ***** 2026-02-05 04:20:47.796550 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:47.796569 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:47.796589 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:47.796607 | orchestrator | 2026-02-05 04:20:47.796626 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-05 04:20:47.796645 | orchestrator | Thursday 05 February 2026 04:20:37 +0000 (0:00:03.513) 0:00:28.806 ***** 2026-02-05 04:20:47.796664 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:20:47.796683 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:20:47.796702 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:20:47.796722 | orchestrator | 2026-02-05 04:20:47.796740 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-05 04:20:47.796760 | orchestrator | Thursday 05 February 2026 04:20:40 +0000 (0:00:03.044) 0:00:31.851 ***** 2026-02-05 04:20:47.796816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:47.796861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:47.796885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-05 04:20:47.796934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:47.796990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:47.797023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-05 04:20:47.797039 | orchestrator | 2026-02-05 
04:20:47.797052 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-05 04:20:47.797066 | orchestrator | Thursday 05 February 2026 04:20:44 +0000 (0:00:03.721) 0:00:35.573 ***** 2026-02-05 04:20:47.797080 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:20:47.797095 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:20:47.797108 | orchestrator | } 2026-02-05 04:20:47.797121 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:20:47.797134 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:20:47.797155 | orchestrator | } 2026-02-05 04:20:47.797168 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:20:47.797181 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:20:47.797193 | orchestrator | } 2026-02-05 04:20:47.797206 | orchestrator | 2026-02-05 04:20:47.797219 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:20:47.797233 | orchestrator | Thursday 05 February 2026 04:20:45 +0000 (0:00:01.407) 0:00:36.980 ***** 2026-02-05 04:20:47.797258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:23:52.282547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:23:52.282664 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:23:52.282701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:23:52.282717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:23:52.282753 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:23:52.282783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-05 04:23:52.282797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-05 04:23:52.282815 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:23:52.282827 | orchestrator | 2026-02-05 04:23:52.282948 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-02-05 04:23:52.282968 | orchestrator | Thursday 05 February 2026 04:20:47 +0000 (0:00:02.002) 0:00:38.983 ***** 2026-02-05 04:23:52.282979 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:23:52.282990 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:23:52.283001 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:23:52.283012 | orchestrator | 2026-02-05 04:23:52.283023 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-05 04:23:52.283034 | orchestrator | Thursday 05 February 2026 04:20:49 +0000 (0:00:01.514) 0:00:40.497 ***** 2026-02-05 04:23:52.283046 | orchestrator | 2026-02-05 04:23:52.283057 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-05 04:23:52.283069 | orchestrator | Thursday 05 February 2026 04:20:49 +0000 (0:00:00.463) 0:00:40.961 ***** 2026-02-05 04:23:52.283081 | orchestrator | 2026-02-05 04:23:52.283094 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-05 04:23:52.283119 | orchestrator | Thursday 05 February 2026 04:20:50 +0000 (0:00:00.448) 0:00:41.410 ***** 2026-02-05 04:23:52.283131 | orchestrator | 2026-02-05 04:23:52.283144 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-05 04:23:52.283156 | orchestrator | Thursday 05 February 2026 04:20:50 +0000 (0:00:00.762) 0:00:42.173 ***** 2026-02-05 04:23:52.283171 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:23:52.283190 | orchestrator | 2026-02-05 04:23:52.283206 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-05 04:23:52.283218 | orchestrator | Thursday 05 February 2026 04:20:54 +0000 (0:00:03.722) 0:00:45.895 ***** 2026-02-05 04:23:52.283236 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:23:52.283254 | orchestrator | 2026-02-05 
04:23:52.283272 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-05 04:23:52.283290 | orchestrator | Thursday 05 February 2026 04:21:01 +0000 (0:00:06.450) 0:00:52.345 ***** 2026-02-05 04:23:52.283308 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:23:52.283326 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:23:52.283345 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:23:52.283360 | orchestrator | 2026-02-05 04:23:52.283376 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-05 04:23:52.283394 | orchestrator | Thursday 05 February 2026 04:22:08 +0000 (0:01:07.330) 0:01:59.676 ***** 2026-02-05 04:23:52.283410 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:23:52.283427 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:23:52.283443 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:23:52.283459 | orchestrator | 2026-02-05 04:23:52.283476 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 04:23:52.283493 | orchestrator | Thursday 05 February 2026 04:23:42 +0000 (0:01:33.746) 0:03:33.423 ***** 2026-02-05 04:23:52.283509 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:23:52.283526 | orchestrator | 2026-02-05 04:23:52.283537 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-05 04:23:52.283547 | orchestrator | Thursday 05 February 2026 04:23:43 +0000 (0:00:01.671) 0:03:35.095 ***** 2026-02-05 04:23:52.283556 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:23:52.283566 | orchestrator | 2026-02-05 04:23:52.283576 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-05 04:23:52.283585 | orchestrator | Thursday 05 February 2026 04:23:47 +0000 (0:00:03.494) 
0:03:38.589 ***** 2026-02-05 04:23:52.283595 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:23:52.283605 | orchestrator | 2026-02-05 04:23:52.283614 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-05 04:23:52.283624 | orchestrator | Thursday 05 February 2026 04:23:51 +0000 (0:00:03.672) 0:03:42.262 ***** 2026-02-05 04:23:52.283634 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:23:52.283643 | orchestrator | 2026-02-05 04:23:52.283653 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-05 04:23:52.283674 | orchestrator | Thursday 05 February 2026 04:23:52 +0000 (0:00:01.204) 0:03:43.466 ***** 2026-02-05 04:23:54.462288 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:23:54.462385 | orchestrator | 2026-02-05 04:23:54.462400 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:23:54.462413 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:23:54.462424 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 04:23:54.462434 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 04:23:54.462444 | orchestrator | 2026-02-05 04:23:54.462455 | orchestrator | 2026-02-05 04:23:54.462491 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:23:54.462502 | orchestrator | Thursday 05 February 2026 04:23:54 +0000 (0:00:01.848) 0:03:45.314 ***** 2026-02-05 04:23:54.462511 | orchestrator | =============================================================================== 2026-02-05 04:23:54.462521 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 93.75s 2026-02-05 04:23:54.462530 | orchestrator | opensearch : 
Restart opensearch container ------------------------------ 67.33s 2026-02-05 04:23:54.462540 | orchestrator | opensearch : Perform a flush -------------------------------------------- 6.45s 2026-02-05 04:23:54.462549 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.87s 2026-02-05 04:23:54.462559 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.72s 2026-02-05 04:23:54.462569 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.72s 2026-02-05 04:23:54.462592 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.67s 2026-02-05 04:23:54.462602 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.67s 2026-02-05 04:23:54.462612 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.51s 2026-02-05 04:23:54.462622 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.49s 2026-02-05 04:23:54.462631 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.04s 2026-02-05 04:23:54.462640 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.74s 2026-02-05 04:23:54.462650 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.30s 2026-02-05 04:23:54.462659 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.29s 2026-02-05 04:23:54.462669 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 2.00s 2026-02-05 04:23:54.462679 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.00s 2026-02-05 04:23:54.462688 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.94s 2026-02-05 04:23:54.462698 | orchestrator | opensearch : Apply 
retention policy to existing indices ----------------- 1.85s 2026-02-05 04:23:54.462708 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.69s 2026-02-05 04:23:54.462718 | orchestrator | opensearch : Flush handlers --------------------------------------------- 1.68s 2026-02-05 04:23:54.739682 | orchestrator | + osism apply -a upgrade memcached 2026-02-05 04:23:56.883620 | orchestrator | 2026-02-05 04:23:56 | INFO  | Task 00c4fbaf-32b9-4581-bbfc-957f7190b5c5 (memcached) was prepared for execution. 2026-02-05 04:23:56.883735 | orchestrator | 2026-02-05 04:23:56 | INFO  | It takes a moment until task 00c4fbaf-32b9-4581-bbfc-957f7190b5c5 (memcached) has been started and output is visible here. 2026-02-05 04:24:30.159803 | orchestrator | 2026-02-05 04:24:30.160007 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 04:24:30.160030 | orchestrator | 2026-02-05 04:24:30.160042 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 04:24:30.160053 | orchestrator | Thursday 05 February 2026 04:24:02 +0000 (0:00:01.630) 0:00:01.630 ***** 2026-02-05 04:24:30.160064 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:24:30.160076 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:24:30.160087 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:24:30.160098 | orchestrator | 2026-02-05 04:24:30.160109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 04:24:30.160120 | orchestrator | Thursday 05 February 2026 04:24:04 +0000 (0:00:01.767) 0:00:03.398 ***** 2026-02-05 04:24:30.160132 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-05 04:24:30.160143 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-05 04:24:30.160154 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-05 04:24:30.160165 | 
orchestrator | 2026-02-05 04:24:30.160202 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-05 04:24:30.160214 | orchestrator | 2026-02-05 04:24:30.160225 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-05 04:24:30.160235 | orchestrator | Thursday 05 February 2026 04:24:06 +0000 (0:00:02.493) 0:00:05.892 ***** 2026-02-05 04:24:30.160247 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:24:30.160258 | orchestrator | 2026-02-05 04:24:30.160269 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-05 04:24:30.160282 | orchestrator | Thursday 05 February 2026 04:24:08 +0000 (0:00:01.932) 0:00:07.824 ***** 2026-02-05 04:24:30.160295 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-05 04:24:30.160308 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-05 04:24:30.160321 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-05 04:24:30.160333 | orchestrator | 2026-02-05 04:24:30.160347 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-05 04:24:30.160359 | orchestrator | Thursday 05 February 2026 04:24:10 +0000 (0:00:01.649) 0:00:09.474 ***** 2026-02-05 04:24:30.160371 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-05 04:24:30.160384 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-05 04:24:30.160396 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-05 04:24:30.160409 | orchestrator | 2026-02-05 04:24:30.160421 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-05 04:24:30.160434 | orchestrator | Thursday 05 February 2026 04:24:13 +0000 (0:00:02.691) 0:00:12.165 ***** 2026-02-05 04:24:30.160468 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-05 04:24:30.160486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-05 04:24:30.160519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-05 04:24:30.160543 | orchestrator | 2026-02-05 04:24:30.160556 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-05 04:24:30.160569 | orchestrator | Thursday 05 February 2026 04:24:15 +0000 (0:00:02.313) 0:00:14.479 ***** 2026-02-05 04:24:30.160582 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:24:30.160595 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:24:30.160609 | orchestrator | } 2026-02-05 04:24:30.160621 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:24:30.160634 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:24:30.160647 | orchestrator | } 2026-02-05 04:24:30.160658 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:24:30.160669 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:24:30.160680 | orchestrator | } 2026-02-05 04:24:30.160691 | orchestrator | 2026-02-05 04:24:30.160702 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:24:30.160713 | orchestrator | Thursday 05 February 2026 04:24:16 +0000 (0:00:01.390) 0:00:15.869 ***** 2026-02-05 04:24:30.160744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-05 04:24:30.160769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-05 04:24:30.160781 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:24:30.160792 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:24:30.160809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-05 04:24:30.160821 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:24:30.160832 | orchestrator | 2026-02-05 04:24:30.160843 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-05 04:24:30.160885 | orchestrator | Thursday 05 February 2026 04:24:18 +0000 (0:00:01.955) 0:00:17.825 ***** 2026-02-05 04:24:30.160905 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:24:30.160924 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:24:30.160954 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:24:30.160971 | orchestrator | 2026-02-05 04:24:30.160983 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:24:30.160994 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:24:30.161007 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:24:30.161018 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:24:30.161029 | orchestrator | 2026-02-05 04:24:30.161040 | orchestrator | 2026-02-05 04:24:30.161051 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:24:30.161071 | orchestrator | Thursday 05 February 2026 04:24:30 +0000 (0:00:11.291) 0:00:29.116 ***** 2026-02-05 04:24:30.461587 | orchestrator | =============================================================================== 2026-02-05 04:24:30.461657 | orchestrator | memcached : Restart memcached container -------------------------------- 11.29s 2026-02-05 04:24:30.461663 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.69s 2026-02-05 04:24:30.461667 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 2.49s 2026-02-05 04:24:30.461672 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.31s 2026-02-05 04:24:30.461676 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.96s 2026-02-05 04:24:30.461680 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.93s 2026-02-05 04:24:30.461684 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.77s 2026-02-05 04:24:30.461688 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.65s 2026-02-05 04:24:30.461692 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.39s 2026-02-05 04:24:30.741482 | orchestrator | + osism apply -a upgrade redis 2026-02-05 04:24:32.751384 | orchestrator | 2026-02-05 04:24:32 | INFO  | Task 24070dcc-5700-4f4e-87ba-bb3e22e4e17e (redis) was prepared for execution. 2026-02-05 04:24:32.751509 | orchestrator | 2026-02-05 04:24:32 | INFO  | It takes a moment until task 24070dcc-5700-4f4e-87ba-bb3e22e4e17e (redis) has been started and output is visible here. 
2026-02-05 04:24:44.062678 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-05 04:24:44.062806 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-05 04:24:44.062830 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-05 04:24:44.062838 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-05 04:24:44.062853 | orchestrator | 2026-02-05 04:24:44.062882 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 04:24:44.062887 | orchestrator | 2026-02-05 04:24:44.062925 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 04:24:44.062934 | orchestrator | Thursday 05 February 2026 04:24:37 +0000 (0:00:01.105) 0:00:01.105 ***** 2026-02-05 04:24:44.062942 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:24:44.062951 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:24:44.062959 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:24:44.062966 | orchestrator | 2026-02-05 04:24:44.062974 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 04:24:44.062982 | orchestrator | Thursday 05 February 2026 04:24:38 +0000 (0:00:00.651) 0:00:01.756 ***** 2026-02-05 04:24:44.062989 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-05 04:24:44.063020 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-05 04:24:44.063028 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-05 04:24:44.063034 | orchestrator | 2026-02-05 04:24:44.063041 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-05 04:24:44.063046 | orchestrator | 2026-02-05 04:24:44.063053 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-05 
04:24:44.063061 | orchestrator | Thursday 05 February 2026 04:24:39 +0000 (0:00:00.831) 0:00:02.587 ***** 2026-02-05 04:24:44.063081 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:24:44.063090 | orchestrator | 2026-02-05 04:24:44.063097 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-05 04:24:44.063103 | orchestrator | Thursday 05 February 2026 04:24:40 +0000 (0:00:00.941) 0:00:03.529 ***** 2026-02-05 04:24:44.063111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063121 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063128 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063136 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063176 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063185 | orchestrator | 2026-02-05 04:24:44.063197 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-05 04:24:44.063206 | orchestrator | Thursday 05 February 2026 04:24:41 +0000 (0:00:01.452) 0:00:04.981 ***** 2026-02-05 04:24:44.063211 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063217 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063223 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063229 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:44.063242 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100287 | orchestrator | 2026-02-05 04:24:49.100296 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-05 04:24:49.100342 | orchestrator | Thursday 05 February 2026 04:24:44 +0000 (0:00:02.216) 0:00:07.198 ***** 2026-02-05 04:24:49.100352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100363 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100384 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100429 | orchestrator | 2026-02-05 04:24:49.100436 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-05 04:24:49.100441 | orchestrator | Thursday 05 February 2026 04:24:46 +0000 (0:00:02.769) 0:00:09.968 ***** 2026-02-05 04:24:49.100451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:24:49.100501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 04:25:12.457804 | orchestrator | 2026-02-05 04:25:12.458009 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-05 04:25:12.458090 | orchestrator | Thursday 05 February 2026 04:24:49 +0000 (0:00:02.267) 0:00:12.236 ***** 2026-02-05 04:25:12.458104 | orchestrator | changed: [testbed-node-0] 
=> { 2026-02-05 04:25:12.458116 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:25:12.458129 | orchestrator | } 2026-02-05 04:25:12.458141 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:25:12.458152 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:25:12.458163 | orchestrator | } 2026-02-05 04:25:12.458174 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:25:12.458185 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:25:12.458196 | orchestrator | } 2026-02-05 04:25:12.458207 | orchestrator | 2026-02-05 04:25:12.458219 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:25:12.458230 | orchestrator | Thursday 05 February 2026 04:24:49 +0000 (0:00:00.552) 0:00:12.788 ***** 2026-02-05 04:25:12.458244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-05 04:25:12.458259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-05 04:25:12.458272 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-05 04:25:12.458283 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-05 04:25:12.458333 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:25:12.458346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-05 04:25:12.458359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-05 04:25:12.458374 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:25:12.458488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-05 04:25:12.458518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-05 04:25:12.458532 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:25:12.458545 | orchestrator | 2026-02-05 04:25:12.458558 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 04:25:12.458571 | orchestrator | Thursday 05 February 2026 04:24:50 +0000 (0:00:01.070) 0:00:13.858 ***** 2026-02-05 04:25:12.458584 | orchestrator | 2026-02-05 04:25:12.458597 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 04:25:12.458610 | orchestrator | Thursday 05 February 2026 04:24:50 +0000 (0:00:00.073) 0:00:13.932 ***** 2026-02-05 04:25:12.458622 | orchestrator | 2026-02-05 04:25:12.458635 | orchestrator | TASK [redis : 
Flush handlers] ************************************************** 2026-02-05 04:25:12.458647 | orchestrator | Thursday 05 February 2026 04:24:50 +0000 (0:00:00.071) 0:00:14.003 ***** 2026-02-05 04:25:12.458660 | orchestrator | 2026-02-05 04:25:12.458672 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-05 04:25:12.458685 | orchestrator | Thursday 05 February 2026 04:24:50 +0000 (0:00:00.070) 0:00:14.074 ***** 2026-02-05 04:25:12.458698 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:25:12.458711 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:25:12.458731 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:25:12.458741 | orchestrator | 2026-02-05 04:25:12.458752 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-05 04:25:12.458763 | orchestrator | Thursday 05 February 2026 04:25:01 +0000 (0:00:10.150) 0:00:24.224 ***** 2026-02-05 04:25:12.458773 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:25:12.458785 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:25:12.458795 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:25:12.458806 | orchestrator | 2026-02-05 04:25:12.458817 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:25:12.458829 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:25:12.458842 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:25:12.458852 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:25:12.458887 | orchestrator | 2026-02-05 04:25:12.458901 | orchestrator | 2026-02-05 04:25:12.458912 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:25:12.458923 | orchestrator 
| Thursday 05 February 2026 04:25:12 +0000 (0:00:10.991) 0:00:35.215 ***** 2026-02-05 04:25:12.458933 | orchestrator | =============================================================================== 2026-02-05 04:25:12.458944 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.99s 2026-02-05 04:25:12.458955 | orchestrator | redis : Restart redis container ---------------------------------------- 10.15s 2026-02-05 04:25:12.458966 | orchestrator | redis : Copying over redis config files --------------------------------- 2.77s 2026-02-05 04:25:12.458976 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.27s 2026-02-05 04:25:12.458987 | orchestrator | redis : Copying over default config.json files -------------------------- 2.22s 2026-02-05 04:25:12.458998 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.45s 2026-02-05 04:25:12.459008 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.07s 2026-02-05 04:25:12.459019 | orchestrator | redis : include_tasks --------------------------------------------------- 0.94s 2026-02-05 04:25:12.459037 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-02-05 04:25:12.459055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2026-02-05 04:25:12.459082 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.55s 2026-02-05 04:25:12.459104 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2026-02-05 04:25:12.735163 | orchestrator | + osism apply -a upgrade mariadb 2026-02-05 04:25:14.795623 | orchestrator | 2026-02-05 04:25:14 | INFO  | Task 40706b63-c9aa-4b17-9720-1a798e0714b2 (mariadb) was prepared for execution. 
2026-02-05 04:25:14.795723 | orchestrator | 2026-02-05 04:25:14 | INFO  | It takes a moment until task 40706b63-c9aa-4b17-9720-1a798e0714b2 (mariadb) has been started and output is visible here. 2026-02-05 04:25:38.890578 | orchestrator | 2026-02-05 04:25:38.890664 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 04:25:38.890676 | orchestrator | 2026-02-05 04:25:38.890684 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 04:25:38.890692 | orchestrator | Thursday 05 February 2026 04:25:20 +0000 (0:00:01.589) 0:00:01.589 ***** 2026-02-05 04:25:38.890699 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:25:38.890707 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:25:38.890714 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:25:38.890721 | orchestrator | 2026-02-05 04:25:38.890727 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 04:25:38.890758 | orchestrator | Thursday 05 February 2026 04:25:22 +0000 (0:00:01.704) 0:00:03.294 ***** 2026-02-05 04:25:38.890766 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-05 04:25:38.890773 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-05 04:25:38.890780 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-05 04:25:38.890786 | orchestrator | 2026-02-05 04:25:38.890793 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-05 04:25:38.890800 | orchestrator | 2026-02-05 04:25:38.890807 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-05 04:25:38.890813 | orchestrator | Thursday 05 February 2026 04:25:24 +0000 (0:00:02.348) 0:00:05.642 ***** 2026-02-05 04:25:38.890820 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:25:38.890827 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 04:25:38.890834 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 04:25:38.890840 | orchestrator | 2026-02-05 04:25:38.890847 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 04:25:38.890854 | orchestrator | Thursday 05 February 2026 04:25:25 +0000 (0:00:01.500) 0:00:07.143 ***** 2026-02-05 04:25:38.890861 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:25:38.890869 | orchestrator | 2026-02-05 04:25:38.890912 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-05 04:25:38.890919 | orchestrator | Thursday 05 February 2026 04:25:27 +0000 (0:00:01.606) 0:00:08.750 ***** 2026-02-05 04:25:38.890931 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:25:38.890961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:25:38.890975 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:25:38.890983 | orchestrator | 2026-02-05 04:25:38.890990 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-05 04:25:38.890997 | orchestrator | Thursday 05 February 2026 04:25:30 +0000 (0:00:03.508) 0:00:12.259 ***** 2026-02-05 04:25:38.891004 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:25:38.891011 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:25:38.891018 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:25:38.891024 | orchestrator | 2026-02-05 04:25:38.891031 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-05 04:25:38.891037 | orchestrator | Thursday 05 February 2026 04:25:32 +0000 (0:00:01.588) 0:00:13.848 ***** 2026-02-05 04:25:38.891044 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:25:38.891051 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:25:38.891057 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:25:38.891064 | orchestrator | 2026-02-05 04:25:38.891075 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-05 04:25:38.891082 | orchestrator | Thursday 05 February 2026 04:25:34 +0000 (0:00:02.178) 0:00:16.026 ***** 2026-02-05 04:25:38.891099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:25:51.235739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:25:51.235847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:25:51.235904 | orchestrator | 2026-02-05 04:25:51.235916 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-05 04:25:51.235925 | orchestrator | Thursday 05 February 2026 04:25:38 +0000 (0:00:04.135) 0:00:20.162 ***** 2026-02-05 04:25:51.235932 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:25:51.235940 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:25:51.235948 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:25:51.235956 | 
orchestrator | 2026-02-05 04:25:51.235964 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-05 04:25:51.235985 | orchestrator | Thursday 05 February 2026 04:25:41 +0000 (0:00:02.163) 0:00:22.326 ***** 2026-02-05 04:25:51.235993 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:25:51.236000 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:25:51.236007 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:25:51.236015 | orchestrator | 2026-02-05 04:25:51.236022 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 04:25:51.236029 | orchestrator | Thursday 05 February 2026 04:25:45 +0000 (0:00:04.796) 0:00:27.122 ***** 2026-02-05 04:25:51.236037 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:25:51.236044 | orchestrator | 2026-02-05 04:25:51.236052 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 04:25:51.236059 | orchestrator | Thursday 05 February 2026 04:25:47 +0000 (0:00:01.849) 0:00:28.971 ***** 2026-02-05 04:25:51.236067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:25:51.236082 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:25:51.236101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:25:58.755846 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:25:58.756006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:25:58.756054 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:25:58.756066 | orchestrator | 2026-02-05 04:25:58.756077 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-05 04:25:58.756088 | orchestrator | Thursday 05 February 2026 04:25:51 +0000 (0:00:03.531) 0:00:32.502 ***** 2026-02-05 04:25:58.756115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:25:58.756127 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:25:58.756157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:25:58.756176 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:25:58.756192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:25:58.756203 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:25:58.756213 | orchestrator | 2026-02-05 04:25:58.756223 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-05 04:25:58.756233 | orchestrator | Thursday 05 February 2026 04:25:54 +0000 (0:00:03.339) 0:00:35.842 ***** 2026-02-05 04:25:58.756253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:02.842528 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:02.842641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:02.842659 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:02.842670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:02.842698 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:02.842707 | orchestrator | 2026-02-05 04:26:02.842717 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-05 04:26:02.842727 | orchestrator | Thursday 05 February 2026 04:25:58 +0000 (0:00:04.181) 0:00:40.024 ***** 2026-02-05 04:26:02.842757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:26:02.842769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:26:02.842793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 04:26:17.236975 | orchestrator | 2026-02-05 04:26:17.237101 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-05 04:26:17.237136 | orchestrator | Thursday 05 February 2026 04:26:02 +0000 (0:00:04.079) 0:00:44.103 ***** 2026-02-05 04:26:17.237153 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:26:17.237168 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:26:17.237182 | orchestrator | } 2026-02-05 04:26:17.237197 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:26:17.237211 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:26:17.237225 | orchestrator | } 2026-02-05 04:26:17.237238 | orchestrator | 
changed: [testbed-node-2] => { 2026-02-05 04:26:17.237251 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:26:17.237277 | orchestrator | } 2026-02-05 04:26:17.237291 | orchestrator | 2026-02-05 04:26:17.237305 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:26:17.237319 | orchestrator | Thursday 05 February 2026 04:26:04 +0000 (0:00:01.335) 0:00:45.440 ***** 2026-02-05 04:26:17.237336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:17.237376 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.237416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:17.237433 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.237448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:17.237471 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.237485 | orchestrator | 2026-02-05 04:26:17.237499 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-05 04:26:17.237512 | orchestrator | Thursday 05 February 2026 04:26:07 +0000 (0:00:03.603) 0:00:49.043 ***** 2026-02-05 04:26:17.237525 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.237538 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.237551 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.237565 | orchestrator | 2026-02-05 04:26:17.237578 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-05 04:26:17.237591 | orchestrator | Thursday 05 February 2026 04:26:09 +0000 (0:00:01.425) 0:00:50.468 ***** 2026-02-05 04:26:17.237604 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.237618 | orchestrator | 2026-02-05 04:26:17.237631 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-05 04:26:17.237644 | orchestrator | Thursday 05 February 2026 04:26:10 +0000 (0:00:01.103) 0:00:51.572 ***** 2026-02-05 04:26:17.237657 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.237671 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.237683 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.237697 | orchestrator | 2026-02-05 04:26:17.237710 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-05 04:26:17.237723 | orchestrator | Thursday 05 February 2026 04:26:11 +0000 (0:00:01.350) 0:00:52.922 ***** 2026-02-05 04:26:17.237736 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 04:26:17.237749 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.237762 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.237775 | orchestrator | 2026-02-05 04:26:17.237788 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-05 04:26:17.237801 | orchestrator | Thursday 05 February 2026 04:26:13 +0000 (0:00:01.514) 0:00:54.437 ***** 2026-02-05 04:26:17.237815 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.237828 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.237841 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.237854 | orchestrator | 2026-02-05 04:26:17.237867 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-05 04:26:17.237880 | orchestrator | Thursday 05 February 2026 04:26:14 +0000 (0:00:01.397) 0:00:55.835 ***** 2026-02-05 04:26:17.238122 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.238137 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.238150 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.238164 | orchestrator | 2026-02-05 04:26:17.238177 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-05 04:26:17.238190 | orchestrator | Thursday 05 February 2026 04:26:15 +0000 (0:00:01.320) 0:00:57.156 ***** 2026-02-05 04:26:17.238203 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:17.238216 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:17.238239 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:17.238254 | orchestrator | 2026-02-05 04:26:17.238279 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-05 04:26:34.702343 | orchestrator | Thursday 05 February 2026 04:26:17 +0000 (0:00:01.347) 0:00:58.503 ***** 2026-02-05 04:26:34.702468 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 04:26:34.702481 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702490 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702498 | orchestrator | 2026-02-05 04:26:34.702507 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-05 04:26:34.702516 | orchestrator | Thursday 05 February 2026 04:26:18 +0000 (0:00:01.548) 0:01:00.052 ***** 2026-02-05 04:26:34.702524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 04:26:34.702532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 04:26:34.702542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 04:26:34.702550 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702557 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 04:26:34.702566 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 04:26:34.702575 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 04:26:34.702583 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702591 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 04:26:34.702598 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 04:26:34.702605 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 04:26:34.702616 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702626 | orchestrator | 2026-02-05 04:26:34.702634 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-05 04:26:34.702643 | orchestrator | Thursday 05 February 2026 04:26:20 +0000 (0:00:01.379) 0:01:01.431 ***** 2026-02-05 04:26:34.702653 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702663 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702674 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702686 | orchestrator | 2026-02-05 04:26:34.702694 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-05 04:26:34.702701 | orchestrator | Thursday 05 February 2026 04:26:21 +0000 (0:00:01.429) 0:01:02.860 ***** 2026-02-05 04:26:34.702708 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702716 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702724 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702732 | orchestrator | 2026-02-05 04:26:34.702744 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-05 04:26:34.702752 | orchestrator | Thursday 05 February 2026 04:26:22 +0000 (0:00:01.326) 0:01:04.187 ***** 2026-02-05 04:26:34.702765 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702773 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702780 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702790 | orchestrator | 2026-02-05 04:26:34.702799 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-05 04:26:34.702810 | orchestrator | Thursday 05 February 2026 04:26:24 +0000 (0:00:01.373) 0:01:05.560 ***** 2026-02-05 04:26:34.702818 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702826 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702835 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702848 | orchestrator | 2026-02-05 04:26:34.702856 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-05 04:26:34.702863 | orchestrator | Thursday 05 February 2026 04:26:25 +0000 (0:00:01.387) 0:01:06.948 ***** 2026-02-05 04:26:34.702874 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702881 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702913 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702946 | orchestrator | 2026-02-05 04:26:34.702954 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-05 04:26:34.702962 | orchestrator | Thursday 05 February 2026 04:26:27 +0000 (0:00:01.334) 0:01:08.283 ***** 2026-02-05 04:26:34.702970 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.702979 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.702988 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.702996 | orchestrator | 2026-02-05 04:26:34.703003 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-05 04:26:34.703011 | orchestrator | Thursday 05 February 2026 04:26:28 +0000 (0:00:01.507) 0:01:09.791 ***** 2026-02-05 04:26:34.703018 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.703026 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.703033 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.703041 | orchestrator | 2026-02-05 04:26:34.703049 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-05 04:26:34.703059 | orchestrator | Thursday 05 February 2026 04:26:29 +0000 (0:00:01.344) 0:01:11.136 ***** 2026-02-05 04:26:34.703069 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.703079 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.703091 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:34.703102 | orchestrator | 2026-02-05 04:26:34.703109 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-05 04:26:34.703118 | orchestrator | Thursday 05 February 2026 04:26:31 +0000 (0:00:01.396) 0:01:12.532 ***** 2026-02-05 04:26:34.703158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:34.703171 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:34.703180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:34.703196 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:34.703215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:51.283938 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284036 | orchestrator | 2026-02-05 04:26:51.284048 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-05 04:26:51.284058 | orchestrator | Thursday 05 February 2026 
04:26:34 +0000 (0:00:03.432) 0:01:15.965 ***** 2026-02-05 04:26:51.284064 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284071 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284084 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284091 | orchestrator | 2026-02-05 04:26:51.284103 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-05 04:26:51.284110 | orchestrator | Thursday 05 February 2026 04:26:36 +0000 (0:00:01.615) 0:01:17.581 ***** 2026-02-05 04:26:51.284120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:51.284153 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:51.284196 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 04:26:51.284213 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284219 | orchestrator | 2026-02-05 04:26:51.284225 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-05 04:26:51.284231 | orchestrator | Thursday 05 February 2026 04:26:39 +0000 (0:00:03.260) 0:01:20.841 ***** 2026-02-05 04:26:51.284238 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284244 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284250 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284256 | orchestrator | 2026-02-05 04:26:51.284262 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-05 04:26:51.284268 | orchestrator | Thursday 05 February 2026 04:26:41 +0000 (0:00:01.705) 0:01:22.547 ***** 2026-02-05 04:26:51.284275 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284281 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284287 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284293 | orchestrator | 2026-02-05 04:26:51.284299 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-05 04:26:51.284306 | orchestrator | Thursday 05 February 2026 04:26:42 +0000 (0:00:01.316) 0:01:23.863 ***** 2026-02-05 04:26:51.284312 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284319 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284325 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284331 | orchestrator | 2026-02-05 04:26:51.284337 | orchestrator 
| TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-05 04:26:51.284343 | orchestrator | Thursday 05 February 2026 04:26:44 +0000 (0:00:01.509) 0:01:25.373 ***** 2026-02-05 04:26:51.284349 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284355 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284361 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284367 | orchestrator | 2026-02-05 04:26:51.284373 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-05 04:26:51.284379 | orchestrator | Thursday 05 February 2026 04:26:45 +0000 (0:00:01.724) 0:01:27.097 ***** 2026-02-05 04:26:51.284385 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:26:51.284394 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:26:51.284400 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:26:51.284407 | orchestrator | 2026-02-05 04:26:51.284413 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-05 04:26:51.284419 | orchestrator | Thursday 05 February 2026 04:26:47 +0000 (0:00:01.898) 0:01:28.995 ***** 2026-02-05 04:26:51.284430 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:26:51.284438 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:26:51.284444 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:26:51.284450 | orchestrator | 2026-02-05 04:26:51.284456 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-05 04:26:51.284462 | orchestrator | Thursday 05 February 2026 04:26:49 +0000 (0:00:01.902) 0:01:30.897 ***** 2026-02-05 04:26:51.284469 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:26:51.284475 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:26:51.284482 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:26:51.284488 | orchestrator | 2026-02-05 04:26:51.284495 | orchestrator | TASK [mariadb : Establish whether the 
cluster has already existed] ************* 2026-02-05 04:26:51.284501 | orchestrator | Thursday 05 February 2026 04:26:51 +0000 (0:00:01.447) 0:01:32.345 ***** 2026-02-05 04:26:51.284513 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.874959 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875058 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875071 | orchestrator | 2026-02-05 04:29:27.875082 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-05 04:29:27.875093 | orchestrator | Thursday 05 February 2026 04:26:52 +0000 (0:00:01.427) 0:01:33.772 ***** 2026-02-05 04:29:27.875102 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875111 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875119 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875124 | orchestrator | 2026-02-05 04:29:27.875129 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-05 04:29:27.875136 | orchestrator | Thursday 05 February 2026 04:26:54 +0000 (0:00:01.984) 0:01:35.757 ***** 2026-02-05 04:29:27.875141 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875146 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875152 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875157 | orchestrator | 2026-02-05 04:29:27.875162 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-05 04:29:27.875167 | orchestrator | Thursday 05 February 2026 04:26:55 +0000 (0:00:01.480) 0:01:37.237 ***** 2026-02-05 04:29:27.875173 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.875179 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.875184 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.875189 | orchestrator | 2026-02-05 04:29:27.875194 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-05 
04:29:27.875199 | orchestrator | Thursday 05 February 2026 04:26:57 +0000 (0:00:01.362) 0:01:38.600 ***** 2026-02-05 04:29:27.875204 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875210 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875215 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875220 | orchestrator | 2026-02-05 04:29:27.875225 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-05 04:29:27.875230 | orchestrator | Thursday 05 February 2026 04:27:01 +0000 (0:00:03.705) 0:01:42.305 ***** 2026-02-05 04:29:27.875235 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875240 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875245 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875250 | orchestrator | 2026-02-05 04:29:27.875255 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-05 04:29:27.875260 | orchestrator | Thursday 05 February 2026 04:27:02 +0000 (0:00:01.368) 0:01:43.675 ***** 2026-02-05 04:29:27.875265 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875270 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875275 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875280 | orchestrator | 2026-02-05 04:29:27.875286 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-05 04:29:27.875292 | orchestrator | Thursday 05 February 2026 04:27:03 +0000 (0:00:01.361) 0:01:45.036 ***** 2026-02-05 04:29:27.875297 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.875302 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.875328 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.875334 | orchestrator | 2026-02-05 04:29:27.875339 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 04:29:27.875345 | orchestrator | Thursday 05 
February 2026 04:27:05 +0000 (0:00:01.703) 0:01:46.740 ***** 2026-02-05 04:29:27.875354 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.875362 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.875371 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.875378 | orchestrator | 2026-02-05 04:29:27.875386 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 04:29:27.875393 | orchestrator | Thursday 05 February 2026 04:27:06 +0000 (0:00:01.520) 0:01:48.261 ***** 2026-02-05 04:29:27.875401 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.875410 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.875417 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.875425 | orchestrator | 2026-02-05 04:29:27.875433 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-05 04:29:27.875440 | orchestrator | Thursday 05 February 2026 04:27:08 +0000 (0:00:01.490) 0:01:49.751 ***** 2026-02-05 04:29:27.875447 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:29:27.875455 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:29:27.875462 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:29:27.875470 | orchestrator | 2026-02-05 04:29:27.875478 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-05 04:29:27.875487 | orchestrator | Thursday 05 February 2026 04:27:10 +0000 (0:00:01.570) 0:01:51.322 ***** 2026-02-05 04:29:27.875495 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.875504 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.875512 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.875519 | orchestrator | 2026-02-05 04:29:27.875527 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-05 04:29:27.875536 | orchestrator | 2026-02-05 
04:29:27.875544 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-05 04:29:27.875552 | orchestrator | Thursday 05 February 2026 04:27:11 +0000 (0:00:01.570) 0:01:52.892 ***** 2026-02-05 04:29:27.875561 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:29:27.875569 | orchestrator | 2026-02-05 04:29:27.875592 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-05 04:29:27.875601 | orchestrator | Thursday 05 February 2026 04:27:36 +0000 (0:00:25.240) 0:02:18.133 ***** 2026-02-05 04:29:27.875610 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875619 | orchestrator | 2026-02-05 04:29:27.875628 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-05 04:29:27.875636 | orchestrator | Thursday 05 February 2026 04:27:42 +0000 (0:00:05.631) 0:02:23.765 ***** 2026-02-05 04:29:27.875645 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.875654 | orchestrator | 2026-02-05 04:29:27.875662 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-05 04:29:27.875671 | orchestrator | 2026-02-05 04:29:27.875679 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-05 04:29:27.875688 | orchestrator | Thursday 05 February 2026 04:27:45 +0000 (0:00:03.254) 0:02:27.020 ***** 2026-02-05 04:29:27.875696 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:29:27.875702 | orchestrator | 2026-02-05 04:29:27.875708 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-05 04:29:27.875731 | orchestrator | Thursday 05 February 2026 04:28:11 +0000 (0:00:25.296) 0:02:52.316 ***** 2026-02-05 04:29:27.875736 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 
2026-02-05 04:29:27.875744 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875752 | orchestrator | 2026-02-05 04:29:27.875760 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-05 04:29:27.875769 | orchestrator | Thursday 05 February 2026 04:28:19 +0000 (0:00:07.970) 0:03:00.286 ***** 2026-02-05 04:29:27.875788 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.875797 | orchestrator | 2026-02-05 04:29:27.875806 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-05 04:29:27.875814 | orchestrator | 2026-02-05 04:29:27.875822 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-05 04:29:27.875830 | orchestrator | Thursday 05 February 2026 04:28:21 +0000 (0:00:02.992) 0:03:03.279 ***** 2026-02-05 04:29:27.875838 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:29:27.875848 | orchestrator | 2026-02-05 04:29:27.875856 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-05 04:29:27.875865 | orchestrator | Thursday 05 February 2026 04:28:45 +0000 (0:00:23.994) 0:03:27.274 ***** 2026-02-05 04:29:27.875893 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
2026-02-05 04:29:27.875901 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875910 | orchestrator | 2026-02-05 04:29:27.875918 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-05 04:29:27.875926 | orchestrator | Thursday 05 February 2026 04:28:54 +0000 (0:00:08.127) 0:03:35.401 ***** 2026-02-05 04:29:27.875934 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-05 04:29:27.875943 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-05 04:29:27.875952 | orchestrator | mariadb_bootstrap_restart 2026-02-05 04:29:27.875959 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.875968 | orchestrator | 2026-02-05 04:29:27.875977 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-05 04:29:27.875985 | orchestrator | skipping: no hosts matched 2026-02-05 04:29:27.875994 | orchestrator | 2026-02-05 04:29:27.876002 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-05 04:29:27.876011 | orchestrator | skipping: no hosts matched 2026-02-05 04:29:27.876019 | orchestrator | 2026-02-05 04:29:27.876027 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-05 04:29:27.876034 | orchestrator | 2026-02-05 04:29:27.876043 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-05 04:29:27.876051 | orchestrator | Thursday 05 February 2026 04:28:58 +0000 (0:00:04.141) 0:03:39.543 ***** 2026-02-05 04:29:27.876059 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:29:27.876068 | orchestrator | 2026-02-05 04:29:27.876077 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-05 04:29:27.876085 | orchestrator | Thursday 05 February 2026 
04:29:00 +0000 (0:00:01.972) 0:03:41.515 ***** 2026-02-05 04:29:27.876094 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.876102 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.876111 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.876119 | orchestrator | 2026-02-05 04:29:27.876127 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-05 04:29:27.876135 | orchestrator | Thursday 05 February 2026 04:29:03 +0000 (0:00:03.435) 0:03:44.951 ***** 2026-02-05 04:29:27.876142 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.876149 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.876157 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:29:27.876164 | orchestrator | 2026-02-05 04:29:27.876172 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-05 04:29:27.876179 | orchestrator | Thursday 05 February 2026 04:29:07 +0000 (0:00:03.444) 0:03:48.396 ***** 2026-02-05 04:29:27.876187 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.876195 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.876203 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.876211 | orchestrator | 2026-02-05 04:29:27.876219 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-05 04:29:27.876227 | orchestrator | Thursday 05 February 2026 04:29:10 +0000 (0:00:03.241) 0:03:51.638 ***** 2026-02-05 04:29:27.876243 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.876251 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.876259 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:29:27.876267 | orchestrator | 2026-02-05 04:29:27.876274 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-05 04:29:27.876282 | orchestrator | Thursday 05 February 2026 04:29:13 +0000 
(0:00:03.490) 0:03:55.129 ***** 2026-02-05 04:29:27.876291 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.876299 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.876307 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.876315 | orchestrator | 2026-02-05 04:29:27.876331 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-05 04:29:27.876339 | orchestrator | Thursday 05 February 2026 04:29:20 +0000 (0:00:06.245) 0:04:01.375 ***** 2026-02-05 04:29:27.876347 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.876356 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.876364 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.876373 | orchestrator | 2026-02-05 04:29:27.876382 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-05 04:29:27.876390 | orchestrator | Thursday 05 February 2026 04:29:23 +0000 (0:00:02.943) 0:04:04.319 ***** 2026-02-05 04:29:27.876398 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:29:27.876407 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:29:27.876415 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:29:27.876422 | orchestrator | 2026-02-05 04:29:27.876431 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-05 04:29:27.876437 | orchestrator | Thursday 05 February 2026 04:29:24 +0000 (0:00:01.346) 0:04:05.665 ***** 2026-02-05 04:29:27.876442 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:29:27.876447 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:29:27.876452 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:29:27.876457 | orchestrator | 2026-02-05 04:29:27.876473 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-05 04:29:47.928838 | orchestrator | Thursday 05 February 2026 04:29:27 +0000 (0:00:03.476) 0:04:09.142 ***** 
2026-02-05 04:29:47.928943 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:29:47.928953 | orchestrator | 2026-02-05 04:29:47.928968 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-05 04:29:47.928975 | orchestrator | Thursday 05 February 2026 04:29:29 +0000 (0:00:01.675) 0:04:10.817 ***** 2026-02-05 04:29:47.928987 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:29:47.928995 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:29:47.929001 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:29:47.929007 | orchestrator | 2026-02-05 04:29:47.929014 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:29:47.929023 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-05 04:29:47.929031 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-05 04:29:47.929038 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-05 04:29:47.929044 | orchestrator | 2026-02-05 04:29:47.929050 | orchestrator | 2026-02-05 04:29:47.929056 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:29:47.929062 | orchestrator | Thursday 05 February 2026 04:29:47 +0000 (0:00:17.985) 0:04:28.803 ***** 2026-02-05 04:29:47.929069 | orchestrator | =============================================================================== 2026-02-05 04:29:47.929075 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 74.53s 2026-02-05 04:29:47.929081 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.73s 2026-02-05 04:29:47.929114 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
17.99s 2026-02-05 04:29:47.929121 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.39s 2026-02-05 04:29:47.929127 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.25s 2026-02-05 04:29:47.929133 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.80s 2026-02-05 04:29:47.929139 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.18s 2026-02-05 04:29:47.929145 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.14s 2026-02-05 04:29:47.929151 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.08s 2026-02-05 04:29:47.929156 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.71s 2026-02-05 04:29:47.929162 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.60s 2026-02-05 04:29:47.929169 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.53s 2026-02-05 04:29:47.929175 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.51s 2026-02-05 04:29:47.929182 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.49s 2026-02-05 04:29:47.929188 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.48s 2026-02-05 04:29:47.929195 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.44s 2026-02-05 04:29:47.929202 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.44s 2026-02-05 04:29:47.929209 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.43s 2026-02-05 04:29:47.929216 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.34s 
2026-02-05 04:29:47.929224 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.26s 2026-02-05 04:29:48.231169 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-05 04:29:50.268846 | orchestrator | 2026-02-05 04:29:50 | INFO  | Task c787b87c-0277-4283-8498-f1dbfe09ef40 (rabbitmq) was prepared for execution. 2026-02-05 04:29:50.268956 | orchestrator | 2026-02-05 04:29:50 | INFO  | It takes a moment until task c787b87c-0277-4283-8498-f1dbfe09ef40 (rabbitmq) has been started and output is visible here. 2026-02-05 04:30:35.409760 | orchestrator | 2026-02-05 04:30:35.409924 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 04:30:35.409942 | orchestrator | 2026-02-05 04:30:35.409949 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 04:30:35.409955 | orchestrator | Thursday 05 February 2026 04:29:56 +0000 (0:00:01.803) 0:00:01.803 ***** 2026-02-05 04:30:35.409962 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:30:35.409970 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:30:35.409976 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:30:35.409982 | orchestrator | 2026-02-05 04:30:35.409988 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 04:30:35.409994 | orchestrator | Thursday 05 February 2026 04:29:57 +0000 (0:00:01.772) 0:00:03.576 ***** 2026-02-05 04:30:35.410000 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-05 04:30:35.410008 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-05 04:30:35.410063 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-05 04:30:35.410070 | orchestrator | 2026-02-05 04:30:35.410076 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-05 04:30:35.410083 | orchestrator | 
2026-02-05 04:30:35.410090 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 04:30:35.410096 | orchestrator | Thursday 05 February 2026 04:30:00 +0000 (0:00:02.244) 0:00:05.820 ***** 2026-02-05 04:30:35.410103 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:30:35.410110 | orchestrator | 2026-02-05 04:30:35.410115 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 04:30:35.410143 | orchestrator | Thursday 05 February 2026 04:30:02 +0000 (0:00:02.831) 0:00:08.652 ***** 2026-02-05 04:30:35.410149 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:30:35.410155 | orchestrator | 2026-02-05 04:30:35.410160 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-05 04:30:35.410166 | orchestrator | Thursday 05 February 2026 04:30:05 +0000 (0:00:02.435) 0:00:11.087 ***** 2026-02-05 04:30:35.410171 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:30:35.410177 | orchestrator | 2026-02-05 04:30:35.410182 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-05 04:30:35.410188 | orchestrator | Thursday 05 February 2026 04:30:08 +0000 (0:00:03.316) 0:00:14.404 ***** 2026-02-05 04:30:35.410194 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:30:35.410201 | orchestrator | 2026-02-05 04:30:35.410206 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-05 04:30:35.410213 | orchestrator | Thursday 05 February 2026 04:30:19 +0000 (0:00:10.356) 0:00:24.760 ***** 2026-02-05 04:30:35.410218 | orchestrator | ok: [testbed-node-0] => { 2026-02-05 04:30:35.410224 | orchestrator |  "changed": false, 2026-02-05 04:30:35.410229 | orchestrator |  "msg": "All assertions passed" 2026-02-05 04:30:35.410235 | orchestrator | } 2026-02-05 
04:30:35.410241 | orchestrator | 2026-02-05 04:30:35.410247 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-05 04:30:35.410252 | orchestrator | Thursday 05 February 2026 04:30:20 +0000 (0:00:01.334) 0:00:26.095 ***** 2026-02-05 04:30:35.410259 | orchestrator | ok: [testbed-node-0] => { 2026-02-05 04:30:35.410264 | orchestrator |  "changed": false, 2026-02-05 04:30:35.410270 | orchestrator |  "msg": "All assertions passed" 2026-02-05 04:30:35.410276 | orchestrator | } 2026-02-05 04:30:35.410282 | orchestrator | 2026-02-05 04:30:35.410288 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 04:30:35.410293 | orchestrator | Thursday 05 February 2026 04:30:22 +0000 (0:00:01.689) 0:00:27.785 ***** 2026-02-05 04:30:35.410300 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:30:35.410306 | orchestrator | 2026-02-05 04:30:35.410311 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 04:30:35.410317 | orchestrator | Thursday 05 February 2026 04:30:23 +0000 (0:00:01.665) 0:00:29.451 ***** 2026-02-05 04:30:35.410323 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:30:35.410329 | orchestrator | 2026-02-05 04:30:35.410335 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-05 04:30:35.410341 | orchestrator | Thursday 05 February 2026 04:30:26 +0000 (0:00:02.376) 0:00:31.828 ***** 2026-02-05 04:30:35.410347 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:30:35.410352 | orchestrator | 2026-02-05 04:30:35.410358 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-05 04:30:35.410364 | orchestrator | Thursday 05 February 2026 04:30:29 +0000 (0:00:03.203) 0:00:35.032 ***** 2026-02-05 04:30:35.410370 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 04:30:35.410375 | orchestrator | 2026-02-05 04:30:35.410381 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-05 04:30:35.410387 | orchestrator | Thursday 05 February 2026 04:30:31 +0000 (0:00:01.829) 0:00:36.861 ***** 2026-02-05 04:30:35.410424 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:35.410440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:35.410448 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:35.410455 | orchestrator | 2026-02-05 04:30:35.410461 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-05 04:30:35.410467 | orchestrator | Thursday 05 February 2026 04:30:32 +0000 (0:00:01.783) 0:00:38.645 ***** 2026-02-05 04:30:35.410474 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:35.410493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:54.527180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:54.527325 | orchestrator | 2026-02-05 04:30:54.527344 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-05 04:30:54.527358 | orchestrator | Thursday 05 February 2026 04:30:35 +0000 (0:00:02.496) 0:00:41.141 ***** 2026-02-05 04:30:54.527369 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 04:30:54.527381 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 04:30:54.527392 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 04:30:54.527402 | 
orchestrator | 2026-02-05 04:30:54.527413 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-05 04:30:54.527424 | orchestrator | Thursday 05 February 2026 04:30:37 +0000 (0:00:02.379) 0:00:43.521 ***** 2026-02-05 04:30:54.527435 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 04:30:54.527446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 04:30:54.527457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 04:30:54.527468 | orchestrator | 2026-02-05 04:30:54.527479 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-05 04:30:54.527490 | orchestrator | Thursday 05 February 2026 04:30:40 +0000 (0:00:02.982) 0:00:46.504 ***** 2026-02-05 04:30:54.527500 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 04:30:54.527511 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 04:30:54.527522 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 04:30:54.527532 | orchestrator | 2026-02-05 04:30:54.527543 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-05 04:30:54.527582 | orchestrator | Thursday 05 February 2026 04:30:43 +0000 (0:00:02.272) 0:00:48.776 ***** 2026-02-05 04:30:54.527594 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 04:30:54.527605 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 04:30:54.527616 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 
2026-02-05 04:30:54.527627 | orchestrator | 2026-02-05 04:30:54.527638 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-05 04:30:54.527648 | orchestrator | Thursday 05 February 2026 04:30:45 +0000 (0:00:02.393) 0:00:51.170 ***** 2026-02-05 04:30:54.527659 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 04:30:54.527670 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 04:30:54.527681 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 04:30:54.527691 | orchestrator | 2026-02-05 04:30:54.527703 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-05 04:30:54.527717 | orchestrator | Thursday 05 February 2026 04:30:47 +0000 (0:00:02.280) 0:00:53.450 ***** 2026-02-05 04:30:54.527744 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 04:30:54.527757 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 04:30:54.527770 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 04:30:54.527783 | orchestrator | 2026-02-05 04:30:54.527795 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 04:30:54.527807 | orchestrator | Thursday 05 February 2026 04:30:50 +0000 (0:00:02.516) 0:00:55.966 ***** 2026-02-05 04:30:54.527821 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:30:54.527833 | orchestrator | 2026-02-05 04:30:54.527864 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-05 04:30:54.527901 | orchestrator | 
Thursday 05 February 2026 04:30:51 +0000 (0:00:01.686) 0:00:57.653 ***** 2026-02-05 04:30:54.527914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:54.527928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:54.527949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:30:54.527961 | orchestrator | 2026-02-05 04:30:54.527972 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-05 04:30:54.527989 | orchestrator | Thursday 05 February 2026 04:30:54 +0000 (0:00:02.482) 0:01:00.135 ***** 2026-02-05 04:30:54.528011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:31:04.182059 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:31:04.182149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-02-05 04:31:04.182181 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:31:04.182187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:31:04.182193 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:31:04.182197 | orchestrator | 2026-02-05 04:31:04.182203 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-05 04:31:04.182208 | orchestrator | Thursday 05 February 2026 04:30:55 +0000 (0:00:01.429) 0:01:01.565 ***** 2026-02-05 04:31:04.182222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:31:04.182227 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:31:04.182243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:31:04.182248 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:31:04.182252 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:31:04.182260 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:31:04.182265 | orchestrator | 2026-02-05 04:31:04.182269 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-05 04:31:04.182273 | orchestrator | Thursday 05 February 2026 04:30:57 +0000 (0:00:01.884) 0:01:03.450 ***** 2026-02-05 04:31:04.182278 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:31:04.182283 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:31:04.182287 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:31:04.182291 | orchestrator | 2026-02-05 04:31:04.182295 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-05 04:31:04.182299 | orchestrator | Thursday 05 February 2026 04:31:01 +0000 (0:00:04.031) 0:01:07.481 ***** 2026-02-05 04:31:04.182304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:31:04.182314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:32:53.650270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 04:32:53.650367 | orchestrator | 2026-02-05 04:32:53.650377 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-05 04:32:53.650384 | orchestrator | Thursday 05 February 2026 04:31:04 +0000 (0:00:02.434) 0:01:09.916 ***** 2026-02-05 04:32:53.650391 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:32:53.650397 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:32:53.650402 | orchestrator | } 2026-02-05 04:32:53.650407 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:32:53.650413 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:32:53.650418 | orchestrator | } 2026-02-05 04:32:53.650423 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 
04:32:53.650428 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:32:53.650433 | orchestrator | } 2026-02-05 04:32:53.650438 | orchestrator | 2026-02-05 04:32:53.650443 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:32:53.650449 | orchestrator | Thursday 05 February 2026 04:31:05 +0000 (0:00:01.358) 0:01:11.275 ***** 2026-02-05 04:32:53.650455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:32:53.650461 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:32:53.650470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:32:53.650480 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:32:53.650498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 04:32:53.650504 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:32:53.650510 | orchestrator | 
2026-02-05 04:32:53.650515 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-05 04:32:53.650520 | orchestrator | Thursday 05 February 2026 04:31:07 +0000 (0:00:01.928) 0:01:13.204 ***** 2026-02-05 04:32:53.650526 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:32:53.650531 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:32:53.650536 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:32:53.650541 | orchestrator | 2026-02-05 04:32:53.650546 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 04:32:53.650551 | orchestrator | 2026-02-05 04:32:53.650557 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 04:32:53.650562 | orchestrator | Thursday 05 February 2026 04:31:09 +0000 (0:00:02.160) 0:01:15.365 ***** 2026-02-05 04:32:53.650567 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:32:53.650574 | orchestrator | 2026-02-05 04:32:53.650579 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 04:32:53.650584 | orchestrator | Thursday 05 February 2026 04:31:11 +0000 (0:00:02.263) 0:01:17.628 ***** 2026-02-05 04:32:53.650589 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:32:53.650594 | orchestrator | 2026-02-05 04:32:53.650600 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 04:32:53.650605 | orchestrator | Thursday 05 February 2026 04:31:22 +0000 (0:00:10.341) 0:01:27.969 ***** 2026-02-05 04:32:53.650610 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:32:53.650615 | orchestrator | 2026-02-05 04:32:53.650620 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 04:32:53.650626 | orchestrator | Thursday 05 February 2026 04:31:31 +0000 (0:00:09.290) 0:01:37.259 ***** 2026-02-05 04:32:53.650631 | 
orchestrator | changed: [testbed-node-0] 2026-02-05 04:32:53.650636 | orchestrator | 2026-02-05 04:32:53.650641 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 04:32:53.650646 | orchestrator | 2026-02-05 04:32:53.650651 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 04:32:53.650656 | orchestrator | Thursday 05 February 2026 04:31:43 +0000 (0:00:12.070) 0:01:49.330 ***** 2026-02-05 04:32:53.650662 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:32:53.650667 | orchestrator | 2026-02-05 04:32:53.650672 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 04:32:53.650677 | orchestrator | Thursday 05 February 2026 04:31:45 +0000 (0:00:01.763) 0:01:51.094 ***** 2026-02-05 04:32:53.650682 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:32:53.650687 | orchestrator | 2026-02-05 04:32:53.650693 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 04:32:53.650702 | orchestrator | Thursday 05 February 2026 04:31:54 +0000 (0:00:09.320) 0:02:00.414 ***** 2026-02-05 04:32:53.650707 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:32:53.650712 | orchestrator | 2026-02-05 04:32:53.650720 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 04:32:53.650725 | orchestrator | Thursday 05 February 2026 04:32:08 +0000 (0:00:13.529) 0:02:13.944 ***** 2026-02-05 04:32:53.650731 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:32:53.650736 | orchestrator | 2026-02-05 04:32:53.650742 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 04:32:53.650747 | orchestrator | 2026-02-05 04:32:53.650752 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 04:32:53.650757 | 
orchestrator | Thursday 05 February 2026 04:32:18 +0000 (0:00:09.950) 0:02:23.894 ***** 2026-02-05 04:32:53.650763 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:32:53.650769 | orchestrator | 2026-02-05 04:32:53.650775 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 04:32:53.650781 | orchestrator | Thursday 05 February 2026 04:32:19 +0000 (0:00:01.678) 0:02:25.573 ***** 2026-02-05 04:32:53.650787 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:32:53.650793 | orchestrator | 2026-02-05 04:32:53.650799 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 04:32:53.650805 | orchestrator | Thursday 05 February 2026 04:32:29 +0000 (0:00:09.537) 0:02:35.111 ***** 2026-02-05 04:32:53.650811 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:32:53.650817 | orchestrator | 2026-02-05 04:32:53.650823 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 04:32:53.650829 | orchestrator | Thursday 05 February 2026 04:32:42 +0000 (0:00:13.539) 0:02:48.650 ***** 2026-02-05 04:32:53.650835 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:32:53.650841 | orchestrator | 2026-02-05 04:32:53.650847 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-05 04:32:53.650853 | orchestrator | 2026-02-05 04:32:53.650859 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-05 04:32:53.650869 | orchestrator | Thursday 05 February 2026 04:32:53 +0000 (0:00:10.726) 0:02:59.377 ***** 2026-02-05 04:33:00.163398 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:33:00.163518 | orchestrator | 2026-02-05 04:33:00.163534 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-05 04:33:00.163545 | 
orchestrator | Thursday 05 February 2026 04:32:54 +0000 (0:00:01.318) 0:03:00.695 ***** 2026-02-05 04:33:00.163556 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:33:00.163567 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:33:00.163577 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:33:00.163586 | orchestrator | 2026-02-05 04:33:00.163596 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:33:00.163607 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 04:33:00.163618 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 04:33:00.163628 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 04:33:00.163638 | orchestrator | 2026-02-05 04:33:00.163647 | orchestrator | 2026-02-05 04:33:00.163657 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:33:00.163667 | orchestrator | Thursday 05 February 2026 04:32:59 +0000 (0:00:04.879) 0:03:05.575 ***** 2026-02-05 04:33:00.163676 | orchestrator | =============================================================================== 2026-02-05 04:33:00.163686 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 36.36s 2026-02-05 04:33:00.163723 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 32.75s 2026-02-05 04:33:00.163733 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 29.20s 2026-02-05 04:33:00.163743 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.36s 2026-02-05 04:33:00.163753 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.71s 2026-02-05 04:33:00.163762 | orchestrator | rabbitmq : Enable all 
stable feature flags ------------------------------ 4.88s
2026-02-05 04:33:00.163772 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.03s
2026-02-05 04:33:00.163781 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.32s
2026-02-05 04:33:00.163791 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.20s
2026-02-05 04:33:00.163800 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.98s
2026-02-05 04:33:00.163810 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.83s
2026-02-05 04:33:00.163820 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.52s
2026-02-05 04:33:00.163829 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.50s
2026-02-05 04:33:00.163839 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.48s
2026-02-05 04:33:00.163849 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.44s
2026-02-05 04:33:00.163858 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.43s
2026-02-05 04:33:00.163868 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.39s
2026-02-05 04:33:00.163877 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.38s
2026-02-05 04:33:00.163887 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.38s
2026-02-05 04:33:00.163896 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.28s
2026-02-05 04:33:00.461238 | orchestrator | + osism apply -a upgrade openvswitch
2026-02-05 04:33:02.505409 | orchestrator | 2026-02-05 04:33:02 | INFO  | Task 19c8fb0a-e866-4309-b619-1b22712072d4 (openvswitch) was prepared for execution.
2026-02-05 04:33:02.505576 | orchestrator | 2026-02-05 04:33:02 | INFO  | It takes a moment until task 19c8fb0a-e866-4309-b619-1b22712072d4 (openvswitch) has been started and output is visible here.
2026-02-05 04:33:27.375590 | orchestrator |
2026-02-05 04:33:27.375720 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 04:33:27.375736 | orchestrator |
2026-02-05 04:33:27.375747 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 04:33:27.375758 | orchestrator | Thursday 05 February 2026 04:33:08 +0000 (0:00:01.669) 0:00:01.669 *****
2026-02-05 04:33:27.375768 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:33:27.375779 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:33:27.375788 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:33:27.375798 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:33:27.375807 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:33:27.375817 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:33:27.375826 | orchestrator |
2026-02-05 04:33:27.375836 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 04:33:27.375846 | orchestrator | Thursday 05 February 2026 04:33:10 +0000 (0:00:02.554) 0:00:04.223 *****
2026-02-05 04:33:27.375856 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-05 04:33:27.375867 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-05 04:33:27.375876 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-05 04:33:27.375886 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-05 04:33:27.375896 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-05 04:33:27.375972 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-05 04:33:27.375985 | orchestrator |
2026-02-05 04:33:27.375995 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-05 04:33:27.376004 | orchestrator |
2026-02-05 04:33:27.376014 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-05 04:33:27.376024 | orchestrator | Thursday 05 February 2026 04:33:13 +0000 (0:00:02.517) 0:00:06.741 *****
2026-02-05 04:33:27.376035 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 04:33:27.376046 | orchestrator |
2026-02-05 04:33:27.376056 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-05 04:33:27.376066 | orchestrator | Thursday 05 February 2026 04:33:15 +0000 (0:00:02.607) 0:00:09.349 *****
2026-02-05 04:33:27.376075 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-05 04:33:27.376086 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-05 04:33:27.376095 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-05 04:33:27.376105 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-05 04:33:27.376115 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-05 04:33:27.376124 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-05 04:33:27.376135 | orchestrator |
2026-02-05 04:33:27.376146 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-05 04:33:27.376158 | orchestrator | Thursday 05 February 2026 04:33:17 +0000 (0:00:01.715) 0:00:11.065 *****
2026-02-05 04:33:27.376169 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-05 04:33:27.376180 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-05 04:33:27.376191 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-05 04:33:27.376202 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-05 04:33:27.376213 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-05 04:33:27.376224 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-05 04:33:27.376234 | orchestrator |
2026-02-05 04:33:27.376246 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-05 04:33:27.376257 | orchestrator | Thursday 05 February 2026 04:33:20 +0000 (0:00:02.606) 0:00:13.671 *****
2026-02-05 04:33:27.376268 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-05 04:33:27.376279 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:33:27.376290 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-05 04:33:27.376302 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:33:27.376312 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-05 04:33:27.376324 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:33:27.376335 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-05 04:33:27.376346 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:33:27.376357 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-05 04:33:27.376368 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:33:27.376378 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-05 04:33:27.376389 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:33:27.376400 | orchestrator |
2026-02-05 04:33:27.376412 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-05 04:33:27.376423 | orchestrator | Thursday 05 February 2026 04:33:22 +0000 (0:00:02.311) 0:00:15.983 *****
2026-02-05 04:33:27.376434 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:33:27.376445 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:33:27.376456 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:33:27.376467 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:33:27.376479 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:33:27.376490 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:33:27.376507 | orchestrator |
2026-02-05 04:33:27.376517 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-05 04:33:27.376542 | orchestrator | Thursday 05 February 2026 04:33:24 +0000 (0:00:02.185) 0:00:18.169 *****
2026-02-05 04:33:27.376572 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:27.376589 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:27.376600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:27.376610 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:27.376620 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:27.376635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:27.376659 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:29.647901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:29.648094 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648115 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:29.648130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:29.648188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:29.648202 | orchestrator |
2026-02-05 04:33:29.648216 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-05 04:33:29.648230 | orchestrator | Thursday 05 February 2026 04:33:27 +0000 (0:00:02.534) 0:00:20.704 *****
2026-02-05 04:33:29.648264 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648279 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648291 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648304 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648342 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648356 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:29.648377 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:35.498350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:35.498428 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:35.498435 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:35.498465 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:35.498470 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:35.498474 | orchestrator |
2026-02-05 04:33:35.498479 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-05 04:33:35.498485 | orchestrator | Thursday 05 February 2026 04:33:31 +0000 (0:00:03.655) 0:00:24.359 *****
2026-02-05 04:33:35.498489 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:33:35.498493 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:33:35.498497 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:33:35.498501 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:33:35.498505 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:33:35.498509 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:33:35.498513 | orchestrator |
2026-02-05 04:33:35.498517 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-05 04:33:35.498530 | orchestrator | Thursday 05 February 2026 04:33:33 +0000 (0:00:02.376) 0:00:26.735 *****
2026-02-05 04:33:35.498534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:35.498541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:35.498549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:35.498555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:35.498560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:35.498568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:39.093583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.093691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.093703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.093725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.093733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.093754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.093763 | orchestrator |
2026-02-05 04:33:39.093772 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-05 04:33:39.093781 | orchestrator | Thursday 05 February 2026 04:33:36 +0000 (0:00:03.338) 0:00:30.074 *****
2026-02-05 04:33:39.093796 | orchestrator | changed: [testbed-node-0] => {
2026-02-05 04:33:39.093804 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:33:39.093812 | orchestrator | }
2026-02-05 04:33:39.093820 | orchestrator | changed: [testbed-node-1] => {
2026-02-05 04:33:39.093827 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:33:39.093834 | orchestrator | }
2026-02-05 04:33:39.093841 | orchestrator | changed: [testbed-node-2] => {
2026-02-05 04:33:39.093848 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:33:39.093856 | orchestrator | }
2026-02-05 04:33:39.093863 | orchestrator | changed: [testbed-node-3] => {
2026-02-05 04:33:39.093870 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:33:39.093877 | orchestrator | }
2026-02-05 04:33:39.093884 | orchestrator | changed: [testbed-node-4] => {
2026-02-05 04:33:39.093891 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:33:39.093899 | orchestrator | }
2026-02-05 04:33:39.093906 | orchestrator | changed: [testbed-node-5] => {
2026-02-05 04:33:39.093913 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:33:39.093920 | orchestrator | }
2026-02-05 04:33:39.093971 | orchestrator |
2026-02-05 04:33:39.093979 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-05 04:33:39.093987 | orchestrator | Thursday 05 February 2026 04:33:38 +0000 (0:00:01.903) 0:00:31.978 *****
2026-02-05 04:33:39.093995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:39.094009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-05 04:33:39.094064 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:33:39.094072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-05 04:33:39.094081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-05 04:33:39.094100 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:34:10.693566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-05 04:34:10.693687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-05 04:34:10.693708 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:34:10.693753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-05 04:34:10.693776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-05 04:34:10.693795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:34:10.693814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-05 04:34:10.693888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-05 04:34:10.693911 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:34:10.693932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-05 04:34:10.693981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-05 04:34:10.694000 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:34:10.694086 | orchestrator | 2026-02-05 04:34:10.694110 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 04:34:10.694132 | orchestrator | Thursday 05 February 2026 04:33:41 +0000 (0:00:02.522) 0:00:34.500 ***** 2026-02-05 04:34:10.694255 | orchestrator | 2026-02-05 04:34:10.694276 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 04:34:10.694301 | orchestrator | Thursday 05 February 2026 04:33:41 +0000 (0:00:00.518) 0:00:35.019 ***** 2026-02-05 04:34:10.694315 | orchestrator | 2026-02-05 04:34:10.694328 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 04:34:10.694341 | orchestrator | Thursday 05 February 2026 04:33:42 +0000 (0:00:00.514) 0:00:35.533 ***** 2026-02-05 04:34:10.694354 | orchestrator | 2026-02-05 04:34:10.694367 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 04:34:10.694378 | orchestrator | Thursday 05 February 2026 04:33:42 +0000 (0:00:00.528) 0:00:36.062 ***** 2026-02-05 04:34:10.694389 | orchestrator | 2026-02-05 04:34:10.694400 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-05 04:34:10.694411 | orchestrator | Thursday 05 February 2026 04:33:43 +0000 (0:00:00.708) 0:00:36.770 ***** 2026-02-05 04:34:10.694422 | orchestrator | 2026-02-05 04:34:10.694432 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 04:34:10.694443 | orchestrator | Thursday 05 February 2026 04:33:43 +0000 (0:00:00.542) 0:00:37.313 ***** 2026-02-05 04:34:10.694468 | orchestrator | 2026-02-05 04:34:10.694479 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-05 04:34:10.694491 | orchestrator | Thursday 05 February 2026 04:33:44 +0000 (0:00:00.870) 0:00:38.184 ***** 2026-02-05 04:34:10.694502 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:34:10.694512 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:34:10.694523 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:34:10.694535 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:34:10.694546 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:34:10.694556 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:34:10.694567 | orchestrator | 2026-02-05 04:34:10.694578 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-05 04:34:10.694591 | orchestrator | Thursday 05 February 2026 04:33:56 +0000 (0:00:11.894) 0:00:50.078 ***** 2026-02-05 04:34:10.694602 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:34:10.694614 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:34:10.694624 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:34:10.694635 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:34:10.694646 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:34:10.694657 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:34:10.694668 | orchestrator | 2026-02-05 04:34:10.694679 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] 
********* 2026-02-05 04:34:10.694690 | orchestrator | Thursday 05 February 2026 04:33:59 +0000 (0:00:02.351) 0:00:52.430 ***** 2026-02-05 04:34:10.694700 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:34:10.694711 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:34:10.694722 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:34:10.694733 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:34:10.694744 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:34:10.694754 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:34:10.694765 | orchestrator | 2026-02-05 04:34:10.694776 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-05 04:34:10.694802 | orchestrator | Thursday 05 February 2026 04:34:10 +0000 (0:00:11.593) 0:01:04.023 ***** 2026-02-05 04:34:26.636204 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-05 04:34:26.636316 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-05 04:34:26.636333 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-05 04:34:26.636345 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-05 04:34:26.636356 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-05 04:34:26.636367 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-05 04:34:26.636379 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-05 04:34:26.636390 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-2'}) 2026-02-05 04:34:26.636400 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-05 04:34:26.636411 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-05 04:34:26.636422 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-05 04:34:26.636433 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-05 04:34:26.636444 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 04:34:26.636455 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 04:34:26.636492 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 04:34:26.636504 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 04:34:26.636515 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 04:34:26.636526 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 04:34:26.636537 | orchestrator | 2026-02-05 04:34:26.636565 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-05 04:34:26.636578 | orchestrator | Thursday 05 February 2026 04:34:18 +0000 (0:00:08.009) 0:01:12.033 ***** 2026-02-05 04:34:26.636590 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-05 04:34:26.636602 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
04:34:26.636614 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-05 04:34:26.636625 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:34:26.636635 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-05 04:34:26.636646 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:34:26.636657 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-05 04:34:26.636668 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-05 04:34:26.636679 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-05 04:34:26.636690 | orchestrator | 2026-02-05 04:34:26.636702 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-05 04:34:26.636712 | orchestrator | Thursday 05 February 2026 04:34:21 +0000 (0:00:03.311) 0:01:15.345 ***** 2026-02-05 04:34:26.636723 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-05 04:34:26.636734 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:34:26.636745 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-05 04:34:26.636756 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:34:26.636767 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-05 04:34:26.636778 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:34:26.636789 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-05 04:34:26.636800 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-05 04:34:26.636810 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-05 04:34:26.636821 | orchestrator | 2026-02-05 04:34:26.636832 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:34:26.636844 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 04:34:26.636857 | orchestrator | 
testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 04:34:26.636868 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-05 04:34:26.636879 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:34:26.636909 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:34:26.636920 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 04:34:26.636932 | orchestrator | 2026-02-05 04:34:26.636996 | orchestrator | 2026-02-05 04:34:26.637008 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:34:26.637027 | orchestrator | Thursday 05 February 2026 04:34:26 +0000 (0:00:04.239) 0:01:19.585 ***** 2026-02-05 04:34:26.637038 | orchestrator | =============================================================================== 2026-02-05 04:34:26.637049 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.89s 2026-02-05 04:34:26.637060 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.59s 2026-02-05 04:34:26.637070 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.01s 2026-02-05 04:34:26.637081 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.24s 2026-02-05 04:34:26.637092 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.68s 2026-02-05 04:34:26.637110 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.66s 2026-02-05 04:34:26.637129 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.34s 2026-02-05 04:34:26.637150 | orchestrator | openvswitch 
: Ensuring OVS bridge is properly setup --------------------- 3.31s 2026-02-05 04:34:26.637170 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.61s 2026-02-05 04:34:26.637188 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.61s 2026-02-05 04:34:26.637203 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.55s 2026-02-05 04:34:26.637214 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.53s 2026-02-05 04:34:26.637225 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.52s 2026-02-05 04:34:26.637236 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.52s 2026-02-05 04:34:26.637247 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.38s 2026-02-05 04:34:26.637257 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.35s 2026-02-05 04:34:26.637268 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.31s 2026-02-05 04:34:26.637279 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.19s 2026-02-05 04:34:26.637290 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.90s 2026-02-05 04:34:26.637307 | orchestrator | module-load : Load modules ---------------------------------------------- 1.72s 2026-02-05 04:34:26.985401 | orchestrator | + osism apply -a upgrade ovn 2026-02-05 04:34:28.959166 | orchestrator | 2026-02-05 04:34:28 | INFO  | Task 41fe07b4-172f-4ecb-b2de-8c0271e386e5 (ovn) was prepared for execution. 2026-02-05 04:34:28.959248 | orchestrator | 2026-02-05 04:34:28 | INFO  | It takes a moment until task 41fe07b4-172f-4ecb-b2de-8c0271e386e5 (ovn) has been started and output is visible here. 
2026-02-05 04:34:49.308170 | orchestrator | 2026-02-05 04:34:49.308286 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 04:34:49.308303 | orchestrator | 2026-02-05 04:34:49.308315 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 04:34:49.308326 | orchestrator | Thursday 05 February 2026 04:34:33 +0000 (0:00:01.242) 0:00:01.242 ***** 2026-02-05 04:34:49.308335 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:34:49.308346 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:34:49.308355 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:34:49.308365 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:34:49.308375 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:34:49.308386 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:34:49.308396 | orchestrator | 2026-02-05 04:34:49.308407 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 04:34:49.308416 | orchestrator | Thursday 05 February 2026 04:34:37 +0000 (0:00:03.186) 0:00:04.429 ***** 2026-02-05 04:34:49.308426 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-05 04:34:49.308437 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-05 04:34:49.308446 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-05 04:34:49.308481 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-05 04:34:49.308491 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-05 04:34:49.308500 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-05 04:34:49.308510 | orchestrator | 2026-02-05 04:34:49.308519 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-05 04:34:49.308527 | orchestrator | 2026-02-05 04:34:49.308535 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-02-05 04:34:49.308544 | orchestrator | Thursday 05 February 2026 04:34:39 +0000 (0:00:02.497) 0:00:06.926 ***** 2026-02-05 04:34:49.308552 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 04:34:49.308561 | orchestrator | 2026-02-05 04:34:49.308569 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-05 04:34:49.308577 | orchestrator | Thursday 05 February 2026 04:34:42 +0000 (0:00:02.805) 0:00:09.732 ***** 2026-02-05 04:34:49.308587 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308606 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308615 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308638 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308663 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308672 | orchestrator | 2026-02-05 04:34:49.308680 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-05 04:34:49.308695 | orchestrator | Thursday 05 February 2026 04:34:44 +0000 (0:00:02.418) 0:00:12.150 ***** 2026-02-05 04:34:49.308704 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308713 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308722 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308730 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308739 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308749 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308758 | orchestrator | 2026-02-05 04:34:49.308767 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-05 04:34:49.308775 | orchestrator | Thursday 05 February 2026 04:34:47 +0000 (0:00:02.332) 0:00:14.483 ***** 2026-02-05 04:34:49.308783 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:49.308816 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.866852 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867011 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867029 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867041 | orchestrator | 2026-02-05 04:34:56.867056 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2026-02-05 04:34:56.867068 | orchestrator | Thursday 05 February 2026 04:34:49 +0000 (0:00:02.118) 0:00:16.602 ***** 2026-02-05 04:34:56.867082 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867102 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867121 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867140 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867213 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867259 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867272 | orchestrator | 2026-02-05 04:34:56.867283 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-05 04:34:56.867295 | orchestrator | Thursday 05 February 2026 04:34:52 +0000 (0:00:02.948) 0:00:19.550 ***** 2026-02-05 04:34:56.867308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 04:34:56.867453 | orchestrator | 2026-02-05 04:34:56.867473 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-05 04:34:56.867488 | orchestrator | Thursday 05 February 2026 04:34:54 +0000 (0:00:02.576) 0:00:22.127 ***** 2026-02-05 04:34:56.867502 | orchestrator | changed: [testbed-node-0] => { 2026-02-05 04:34:56.867516 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:34:56.867530 | orchestrator | } 2026-02-05 04:34:56.867544 | orchestrator | changed: [testbed-node-1] => { 2026-02-05 04:34:56.867556 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:34:56.867575 | orchestrator | } 2026-02-05 04:34:56.867603 | orchestrator | changed: [testbed-node-2] => { 2026-02-05 04:34:56.867622 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:34:56.867639 | orchestrator | } 2026-02-05 04:34:56.867656 | orchestrator | changed: [testbed-node-3] => { 2026-02-05 04:34:56.867675 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:34:56.867694 | orchestrator | } 2026-02-05 04:34:56.867714 | orchestrator | changed: [testbed-node-4] => { 2026-02-05 04:34:56.867733 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:34:56.867752 | orchestrator | } 2026-02-05 04:34:56.867763 | orchestrator | changed: [testbed-node-5] => { 2026-02-05 04:34:56.867774 | orchestrator |  "msg": "Notifying handlers" 2026-02-05 04:34:56.867784 | orchestrator | } 2026-02-05 04:34:56.867795 | orchestrator | 2026-02-05 04:34:56.867806 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-05 04:34:56.867817 | orchestrator | Thursday 05 February 2026 04:34:56 +0000 
(0:00:01.901) 0:00:24.028 ***** 2026-02-05 04:34:56.867840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:35:29.370459 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:35:29.370575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:35:29.370590 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:35:29.370597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:35:29.370605 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:35:29.370611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:35:29.370618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:35:29.370644 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:35:29.370651 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:35:29.370657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 04:35:29.370663 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:35:29.370669 | orchestrator | 2026-02-05 04:35:29.370677 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-05 04:35:29.370685 | orchestrator | Thursday 05 February 2026 04:34:59 +0000 (0:00:02.446) 0:00:26.475 ***** 2026-02-05 04:35:29.370691 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:35:29.370699 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:35:29.370705 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:35:29.370711 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:35:29.370717 | orchestrator | ok: [testbed-node-4] 
2026-02-05 04:35:29.370723 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:35:29.370729 | orchestrator | 2026-02-05 04:35:29.370735 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-05 04:35:29.370742 | orchestrator | Thursday 05 February 2026 04:35:03 +0000 (0:00:03.955) 0:00:30.431 ***** 2026-02-05 04:35:29.370749 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-05 04:35:29.370769 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-05 04:35:29.370775 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-05 04:35:29.370782 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-05 04:35:29.370788 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-05 04:35:29.370794 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-05 04:35:29.370800 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 04:35:29.370806 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 04:35:29.370812 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 04:35:29.370818 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 04:35:29.370823 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 04:35:29.370843 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 04:35:29.370851 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-05 04:35:29.370859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-05 04:35:29.370865 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-05 04:35:29.370872 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-05 04:35:29.370879 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-05 04:35:29.370885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-05 04:35:29.370900 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 04:35:29.370906 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 04:35:29.370912 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 04:35:29.370918 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 04:35:29.370924 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 04:35:29.370930 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 04:35:29.370936 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 04:35:29.370942 | orchestrator | ok: [testbed-node-3] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 04:35:29.370967 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 04:35:29.370974 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 04:35:29.370979 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 04:35:29.370985 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 04:35:29.370991 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 04:35:29.370998 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 04:35:29.371004 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 04:35:29.371010 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 04:35:29.371016 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 04:35:29.371023 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 04:35:29.371029 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 04:35:29.371035 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 04:35:29.371042 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 04:35:29.371048 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 04:35:29.371060 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 04:35:29.371067 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-05 04:35:29.371079 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-05 04:35:29.371085 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 04:35:29.371091 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-05 04:35:29.371097 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-05 04:35:29.371102 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-05 04:35:29.371120 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 04:38:17.268509 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 04:38:17.268663 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 04:38:17.268681 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 04:38:17.268692 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-05 04:38:17.268703 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 
'state': 'present'}) 2026-02-05 04:38:17.268714 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 04:38:17.268724 | orchestrator | 2026-02-05 04:38:17.268737 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 04:38:17.268756 | orchestrator | Thursday 05 February 2026 04:35:26 +0000 (0:00:23.121) 0:00:53.552 ***** 2026-02-05 04:38:17.268766 | orchestrator | 2026-02-05 04:38:17.268777 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 04:38:17.268787 | orchestrator | Thursday 05 February 2026 04:35:26 +0000 (0:00:00.469) 0:00:54.022 ***** 2026-02-05 04:38:17.268798 | orchestrator | 2026-02-05 04:38:17.268808 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 04:38:17.268817 | orchestrator | Thursday 05 February 2026 04:35:27 +0000 (0:00:00.472) 0:00:54.495 ***** 2026-02-05 04:38:17.268828 | orchestrator | 2026-02-05 04:38:17.268837 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 04:38:17.268847 | orchestrator | Thursday 05 February 2026 04:35:27 +0000 (0:00:00.453) 0:00:54.948 ***** 2026-02-05 04:38:17.268857 | orchestrator | 2026-02-05 04:38:17.268867 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 04:38:17.268876 | orchestrator | Thursday 05 February 2026 04:35:28 +0000 (0:00:00.421) 0:00:55.370 ***** 2026-02-05 04:38:17.268886 | orchestrator | 2026-02-05 04:38:17.268896 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 04:38:17.268906 | orchestrator | Thursday 05 February 2026 04:35:28 +0000 (0:00:00.448) 0:00:55.818 ***** 2026-02-05 04:38:17.268915 | orchestrator | 2026-02-05 04:38:17.268925 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-05 04:38:17.268935 | orchestrator | Thursday 05 February 2026 04:35:29 +0000 (0:00:00.806) 0:00:56.624 ***** 2026-02-05 04:38:17.268945 | orchestrator | 2026-02-05 04:38:17.268955 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-05 04:38:17.268966 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:38:17.268977 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:38:17.269063 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:38:17.269081 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:38:17.269095 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:38:17.269106 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:38:17.269118 | orchestrator | 2026-02-05 04:38:17.269130 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-05 04:38:17.269142 | orchestrator | 2026-02-05 04:38:17.269154 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 04:38:17.269165 | orchestrator | Thursday 05 February 2026 04:37:41 +0000 (0:02:12.120) 0:03:08.745 ***** 2026-02-05 04:38:17.269175 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:38:17.269185 | orchestrator | 2026-02-05 04:38:17.269195 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 04:38:17.269205 | orchestrator | Thursday 05 February 2026 04:37:43 +0000 (0:00:01.895) 0:03:10.640 ***** 2026-02-05 04:38:17.269242 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 04:38:17.269253 | orchestrator | 2026-02-05 04:38:17.269263 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] 
************* 2026-02-05 04:38:17.269272 | orchestrator | Thursday 05 February 2026 04:37:45 +0000 (0:00:01.839) 0:03:12.480 ***** 2026-02-05 04:38:17.269282 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269308 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269318 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269328 | orchestrator | 2026-02-05 04:38:17.269338 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-05 04:38:17.269347 | orchestrator | Thursday 05 February 2026 04:37:47 +0000 (0:00:01.835) 0:03:14.315 ***** 2026-02-05 04:38:17.269357 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269367 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269376 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269386 | orchestrator | 2026-02-05 04:38:17.269395 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-05 04:38:17.269405 | orchestrator | Thursday 05 February 2026 04:37:48 +0000 (0:00:01.361) 0:03:15.677 ***** 2026-02-05 04:38:17.269415 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269424 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269434 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269443 | orchestrator | 2026-02-05 04:38:17.269453 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-05 04:38:17.269463 | orchestrator | Thursday 05 February 2026 04:37:49 +0000 (0:00:01.358) 0:03:17.035 ***** 2026-02-05 04:38:17.269472 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269483 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269492 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269502 | orchestrator | 2026-02-05 04:38:17.269512 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-05 04:38:17.269522 | orchestrator | Thursday 05 
February 2026 04:37:51 +0000 (0:00:01.591) 0:03:18.626 ***** 2026-02-05 04:38:17.269550 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269561 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269571 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269580 | orchestrator | 2026-02-05 04:38:17.269590 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-05 04:38:17.269600 | orchestrator | Thursday 05 February 2026 04:37:52 +0000 (0:00:01.343) 0:03:19.970 ***** 2026-02-05 04:38:17.269611 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:38:17.269628 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:38:17.269643 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:38:17.269657 | orchestrator | 2026-02-05 04:38:17.269673 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-05 04:38:17.269689 | orchestrator | Thursday 05 February 2026 04:37:53 +0000 (0:00:01.332) 0:03:21.302 ***** 2026-02-05 04:38:17.269707 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269723 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269738 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269753 | orchestrator | 2026-02-05 04:38:17.269771 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-05 04:38:17.269787 | orchestrator | Thursday 05 February 2026 04:37:55 +0000 (0:00:01.795) 0:03:23.098 ***** 2026-02-05 04:38:17.269804 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:38:17.269820 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:38:17.269836 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:38:17.269850 | orchestrator | 2026-02-05 04:38:17.269865 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-05 04:38:17.269881 | orchestrator | Thursday 05 February 2026 04:37:57 +0000 (0:00:01.638) 0:03:24.736 
*****
2026-02-05 04:38:17.269897 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:38:17.269913 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:38:17.269930 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:38:17.269960 | orchestrator |
2026-02-05 04:38:17.269975 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-05 04:38:17.270079 | orchestrator | Thursday 05 February 2026 04:37:59 +0000 (0:00:01.943) 0:03:26.680 *****
2026-02-05 04:38:17.270091 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:38:17.270101 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:38:17.270110 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:38:17.270120 | orchestrator |
2026-02-05 04:38:17.270187 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-05 04:38:17.270198 | orchestrator | Thursday 05 February 2026 04:38:00 +0000 (0:00:01.403) 0:03:28.083 *****
2026-02-05 04:38:17.270208 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:38:17.270218 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:38:17.270228 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:38:17.270237 | orchestrator |
2026-02-05 04:38:17.270247 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-05 04:38:17.270257 | orchestrator | Thursday 05 February 2026 04:38:02 +0000 (0:00:01.315) 0:03:29.399 *****
2026-02-05 04:38:17.270267 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:38:17.270277 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:38:17.270287 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:38:17.270296 | orchestrator |
2026-02-05 04:38:17.270306 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-05 04:38:17.270316 | orchestrator | Thursday 05 February 2026 04:38:03 +0000 (0:00:01.324) 0:03:30.723 *****
2026-02-05 04:38:17.270326 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:38:17.270336 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:38:17.270345 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:38:17.270355 | orchestrator |
2026-02-05 04:38:17.270365 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-05 04:38:17.270374 | orchestrator | Thursday 05 February 2026 04:38:05 +0000 (0:00:01.748) 0:03:32.472 *****
2026-02-05 04:38:17.270384 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:38:17.270394 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:38:17.270403 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:38:17.270413 | orchestrator |
2026-02-05 04:38:17.270422 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-05 04:38:17.270432 | orchestrator | Thursday 05 February 2026 04:38:06 +0000 (0:00:01.515) 0:03:33.987 *****
2026-02-05 04:38:17.270441 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:38:17.270451 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:38:17.270461 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:38:17.270470 | orchestrator |
2026-02-05 04:38:17.270480 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-05 04:38:17.270490 | orchestrator | Thursday 05 February 2026 04:38:08 +0000 (0:00:02.173) 0:03:36.160 *****
2026-02-05 04:38:17.270499 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:38:17.270509 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:38:17.270518 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:38:17.270528 | orchestrator |
2026-02-05 04:38:17.270538 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-05 04:38:17.270557 | orchestrator | Thursday 05 February 2026 04:38:10 +0000 (0:00:01.327) 0:03:37.488 *****
2026-02-05 04:38:17.270567 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:38:17.270576 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:38:17.270587 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:38:17.270603 | orchestrator |
2026-02-05 04:38:17.270619 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-05 04:38:17.270634 | orchestrator | Thursday 05 February 2026 04:38:11 +0000 (0:00:01.343) 0:03:38.832 *****
2026-02-05 04:38:17.270650 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:38:17.270663 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:38:17.270673 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:38:17.270687 | orchestrator |
2026-02-05 04:38:17.270703 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-05 04:38:17.270732 | orchestrator | Thursday 05 February 2026 04:38:13 +0000 (0:00:01.690) 0:03:40.522 *****
2026-02-05 04:38:17.270770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334495 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334598 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334614 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334627 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334639 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334733 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334780 | orchestrator |
2026-02-05 04:38:23.334793 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-05 04:38:23.334805 | orchestrator | Thursday 05 February 2026 04:38:17 +0000 (0:00:04.040) 0:03:44.563 *****
2026-02-05 04:38:23.334817 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334833 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334851 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:23.334881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.336732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.336871 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.336899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.336918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.336960 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337171 | orchestrator |
2026-02-05 04:38:37.337193 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-05 04:38:37.337213 | orchestrator | Thursday 05 February 2026 04:38:23 +0000 (0:00:06.067) 0:03:50.630 *****
2026-02-05 04:38:37.337233 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-05 04:38:37.337253 | orchestrator |
2026-02-05 04:38:37.337273 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-05 04:38:37.337286 | orchestrator | Thursday 05 February 2026 04:38:25 +0000 (0:00:01.837) 0:03:52.468 *****
2026-02-05 04:38:37.337300 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:38:37.337315 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:38:37.337347 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:38:37.337360 | orchestrator |
2026-02-05 04:38:37.337374 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-05 04:38:37.337387 | orchestrator | Thursday 05 February 2026 04:38:26 +0000 (0:00:01.755) 0:03:54.224 *****
2026-02-05 04:38:37.337400 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:38:37.337412 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:38:37.337425 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:38:37.337437 | orchestrator |
2026-02-05 04:38:37.337449 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-05 04:38:37.337462 | orchestrator | Thursday 05 February 2026 04:38:29 +0000 (0:00:02.696) 0:03:56.920 *****
2026-02-05 04:38:37.337474 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:38:37.337488 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:38:37.337501 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:38:37.337514 | orchestrator |
2026-02-05 04:38:37.337527 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-05 04:38:37.337544 | orchestrator | Thursday 05 February 2026 04:38:32 +0000 (0:00:02.586) 0:03:59.507 *****
2026-02-05 04:38:37.337563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:37.337731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.745745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.745866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.745910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.745938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.745949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.745959 | orchestrator |
2026-02-05 04:38:41.745970 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-05 04:38:41.745981 | orchestrator | Thursday 05 February 2026 04:38:37 +0000 (0:00:05.113) 0:04:04.621 *****
2026-02-05 04:38:41.746148 | orchestrator | changed: [testbed-node-0] => {
2026-02-05 04:38:41.746159 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:38:41.746168 | orchestrator | }
2026-02-05 04:38:41.746178 | orchestrator | changed: [testbed-node-1] => {
2026-02-05 04:38:41.746186 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:38:41.746195 | orchestrator | }
2026-02-05 04:38:41.746203 | orchestrator | changed: [testbed-node-2] => {
2026-02-05 04:38:41.746212 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:38:41.746221 | orchestrator | }
2026-02-05 04:38:41.746229 | orchestrator |
2026-02-05 04:38:41.746239 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-05 04:38:41.746247 | orchestrator | Thursday 05 February 2026 04:38:38 +0000 (0:00:01.356) 0:04:05.977 *****
2026-02-05 04:38:41.746260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:38:41.746460 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 04:40:06.233421 | orchestrator |
2026-02-05 04:40:06.233523 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-05 04:40:06.233534 | orchestrator | Thursday 05 February 2026 04:38:41 +0000 (0:00:03.061) 0:04:09.038 *****
2026-02-05 04:40:06.233541 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-05 04:40:06.233549 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-05 04:40:06.233556 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-05 04:40:06.233563 | orchestrator |
2026-02-05 04:40:06.233570 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-05 04:40:06.233577 | orchestrator | Thursday 05 February 2026 04:38:43 +0000 (0:00:02.235) 0:04:11.274 *****
2026-02-05 04:40:06.233584 | orchestrator | changed: [testbed-node-0] => {
2026-02-05 04:40:06.233591 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:40:06.233598 | orchestrator | }
2026-02-05 04:40:06.233604 | orchestrator | changed: [testbed-node-1] => {
2026-02-05 04:40:06.233611 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:40:06.233617 | orchestrator | }
2026-02-05 04:40:06.233623 | orchestrator | changed: [testbed-node-2] => {
2026-02-05 04:40:06.233629 | orchestrator |  "msg": "Notifying handlers"
2026-02-05 04:40:06.233635 | orchestrator | }
2026-02-05 04:40:06.233641 | orchestrator |
2026-02-05 04:40:06.233648 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 04:40:06.233654 | orchestrator | Thursday 05 February 2026 04:38:45 +0000 (0:00:01.514) 0:04:12.789 *****
2026-02-05 04:40:06.233660 | orchestrator |
2026-02-05 04:40:06.233666 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 04:40:06.233672 | orchestrator | Thursday 05 February 2026 04:38:45 +0000 (0:00:00.433) 0:04:13.223 *****
2026-02-05 04:40:06.233679 | orchestrator |
2026-02-05 04:40:06.233685 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 04:40:06.233691 | orchestrator | Thursday 05 February 2026 04:38:46 +0000 (0:00:00.441) 0:04:13.665 *****
2026-02-05 04:40:06.233697 | orchestrator |
2026-02-05 04:40:06.233703 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-05 04:40:06.233709 | orchestrator | Thursday 05 February 2026 04:38:47 +0000 (0:00:00.943) 0:04:14.608 *****
2026-02-05 04:40:06.233729 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:40:06.233735 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:40:06.233741 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:40:06.233747 | orchestrator |
2026-02-05 04:40:06.233753 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-05 04:40:06.233760 | orchestrator | Thursday 05 February 2026 04:39:03 +0000 (0:00:15.822) 0:04:30.431 *****
2026-02-05 04:40:06.233766 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:40:06.233772 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:40:06.233778 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:40:06.233784 | orchestrator |
2026-02-05 04:40:06.233790 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-05 04:40:06.233796 | orchestrator | Thursday 05 February 2026 04:39:19 +0000 (0:00:16.025) 0:04:46.456 *****
2026-02-05 04:40:06.233803 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-05 04:40:06.233809 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-05 04:40:06.233814 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-05 04:40:06.233841 | orchestrator |
2026-02-05 04:40:06.233847 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-05 04:40:06.233853 | orchestrator | Thursday 05 February 2026 04:39:29 +0000 (0:00:16.441) 0:04:56.861 *****
2026-02-05 04:40:06.233859 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:40:06.233865 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:40:06.233871 | orchestrator | changed: [testbed-node-2]
2026-02-05 04:40:06.233877 | orchestrator |
2026-02-05 04:40:06.233883 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-05 04:40:06.233889 | orchestrator | Thursday 05 February 2026 04:39:45 +0000 (0:00:16.441) 0:05:13.303 *****
2026-02-05 04:40:06.233895 | orchestrator | Pausing for 5 seconds
2026-02-05 04:40:06.233902 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:40:06.233908 | orchestrator |
2026-02-05 04:40:06.233914 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-05 04:40:06.233920 | orchestrator | Thursday 05 February 2026 04:39:52 +0000 (0:00:06.153) 0:05:19.456 *****
2026-02-05 04:40:06.233927
| orchestrator | ok: [testbed-node-0] 2026-02-05 04:40:06.233932 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:40:06.233939 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:40:06.233945 | orchestrator | 2026-02-05 04:40:06.233951 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-05 04:40:06.233957 | orchestrator | Thursday 05 February 2026 04:39:54 +0000 (0:00:01.885) 0:05:21.342 ***** 2026-02-05 04:40:06.233963 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:40:06.233970 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:40:06.233977 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:40:06.233983 | orchestrator | 2026-02-05 04:40:06.233989 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-05 04:40:06.233995 | orchestrator | Thursday 05 February 2026 04:39:55 +0000 (0:00:01.858) 0:05:23.200 ***** 2026-02-05 04:40:06.234119 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:40:06.234125 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:40:06.234131 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:40:06.234139 | orchestrator | 2026-02-05 04:40:06.234145 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-05 04:40:06.234151 | orchestrator | Thursday 05 February 2026 04:39:57 +0000 (0:00:01.817) 0:05:25.018 ***** 2026-02-05 04:40:06.234157 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:40:06.234163 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:40:06.234169 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:40:06.234176 | orchestrator | 2026-02-05 04:40:06.234182 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-05 04:40:06.234188 | orchestrator | Thursday 05 February 2026 04:39:59 +0000 (0:00:01.648) 0:05:26.667 ***** 2026-02-05 04:40:06.234195 | orchestrator | ok: [testbed-node-0] 
2026-02-05 04:40:06.234201 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:40:06.234207 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:40:06.234213 | orchestrator | 2026-02-05 04:40:06.234219 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-05 04:40:06.234241 | orchestrator | Thursday 05 February 2026 04:40:01 +0000 (0:00:01.803) 0:05:28.470 ***** 2026-02-05 04:40:06.234247 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:40:06.234253 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:40:06.234259 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:40:06.234265 | orchestrator | 2026-02-05 04:40:06.234271 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-05 04:40:06.234278 | orchestrator | Thursday 05 February 2026 04:40:03 +0000 (0:00:01.872) 0:05:30.342 ***** 2026-02-05 04:40:06.234284 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-05 04:40:06.234290 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-05 04:40:06.234296 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-05 04:40:06.234302 | orchestrator | 2026-02-05 04:40:06.234308 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 04:40:06.234315 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 04:40:06.234330 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-05 04:40:06.234336 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 04:40:06.234343 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:40:06.234348 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:40:06.234359 | 
orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 04:40:06.234365 | orchestrator | 2026-02-05 04:40:06.234371 | orchestrator | 2026-02-05 04:40:06.234377 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 04:40:06.234383 | orchestrator | Thursday 05 February 2026 04:40:05 +0000 (0:00:02.841) 0:05:33.184 ***** 2026-02-05 04:40:06.234390 | orchestrator | =============================================================================== 2026-02-05 04:40:06.234396 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.12s 2026-02-05 04:40:06.234401 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.12s 2026-02-05 04:40:06.234408 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.44s 2026-02-05 04:40:06.234413 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.03s 2026-02-05 04:40:06.234420 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.82s 2026-02-05 04:40:06.234426 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 10.40s 2026-02-05 04:40:06.234432 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.15s 2026-02-05 04:40:06.234438 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.07s 2026-02-05 04:40:06.234444 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.11s 2026-02-05 04:40:06.234450 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.04s 2026-02-05 04:40:06.234456 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.96s 2026-02-05 04:40:06.234462 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 3.19s 2026-02-05 04:40:06.234468 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.07s 2026-02-05 04:40:06.234474 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.06s 2026-02-05 04:40:06.234481 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.95s 2026-02-05 04:40:06.234486 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.84s 2026-02-05 04:40:06.234493 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.81s 2026-02-05 04:40:06.234499 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.70s 2026-02-05 04:40:06.234505 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.59s 2026-02-05 04:40:06.234511 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.58s 2026-02-05 04:40:06.523146 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-05 04:40:06.523257 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 04:40:06.523280 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-02-05 04:40:06.531118 | orchestrator | + set -e 2026-02-05 04:40:06.531250 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 04:40:06.531271 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 04:40:06.531312 | orchestrator | ++ INTERACTIVE=false 2026-02-05 04:40:06.531322 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 04:40:06.531331 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 04:40:06.531350 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-02-05 04:40:08.586365 | orchestrator | 2026-02-05 04:40:08 | INFO  | Task e1c4e1b3-c793-4260-9e97-102aef9e8e7a (ceph-rolling_update) was prepared for 
execution. 2026-02-05 04:40:08.586436 | orchestrator | 2026-02-05 04:40:08 | INFO  | It takes a moment until task e1c4e1b3-c793-4260-9e97-102aef9e8e7a (ceph-rolling_update) has been started and output is visible here. 2026-02-05 04:41:32.819996 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 04:41:32.820153 | orchestrator | 2.16.14 2026-02-05 04:41:32.820168 | orchestrator | 2026-02-05 04:41:32.820178 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-05 04:41:32.820187 | orchestrator | 2026-02-05 04:41:32.820196 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-05 04:41:32.820205 | orchestrator | Thursday 05 February 2026 04:40:16 +0000 (0:00:01.559) 0:00:01.559 ***** 2026-02-05 04:41:32.820213 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-05 04:41:32.820221 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-05 04:41:32.820230 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-05 04:41:32.820239 | orchestrator | skipping: [localhost] 2026-02-05 04:41:32.820247 | orchestrator | 2026-02-05 04:41:32.820255 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-05 04:41:32.820263 | orchestrator | 2026-02-05 04:41:32.820271 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-05 04:41:32.820279 | orchestrator | Thursday 05 February 2026 04:40:18 +0000 (0:00:02.027) 0:00:03.587 ***** 2026-02-05 04:41:32.820287 | orchestrator | ok: [testbed-node-0] => { 2026-02-05 04:41:32.820295 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820304 | orchestrator | } 2026-02-05 04:41:32.820312 | orchestrator | ok: [testbed-node-1] => { 
2026-02-05 04:41:32.820320 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820328 | orchestrator | } 2026-02-05 04:41:32.820335 | orchestrator | ok: [testbed-node-2] => { 2026-02-05 04:41:32.820343 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820351 | orchestrator | } 2026-02-05 04:41:32.820359 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 04:41:32.820367 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820375 | orchestrator | } 2026-02-05 04:41:32.820383 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 04:41:32.820390 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820398 | orchestrator | } 2026-02-05 04:41:32.820406 | orchestrator | ok: [testbed-node-5] => { 2026-02-05 04:41:32.820428 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820436 | orchestrator | } 2026-02-05 04:41:32.820444 | orchestrator | ok: [testbed-manager] => { 2026-02-05 04:41:32.820452 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-05 04:41:32.820460 | orchestrator | } 2026-02-05 04:41:32.820468 | orchestrator | 2026-02-05 04:41:32.820476 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-05 04:41:32.820484 | orchestrator | Thursday 05 February 2026 04:40:25 +0000 (0:00:06.287) 0:00:09.874 ***** 2026-02-05 04:41:32.820491 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:41:32.820499 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:41:32.820507 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:41:32.820515 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:41:32.820524 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:41:32.820533 | orchestrator | skipping: [testbed-node-5] 
2026-02-05 04:41:32.820563 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.820572 | orchestrator | 2026-02-05 04:41:32.820582 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-05 04:41:32.820591 | orchestrator | Thursday 05 February 2026 04:40:30 +0000 (0:00:05.492) 0:00:15.367 ***** 2026-02-05 04:41:32.820601 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:41:32.820609 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:41:32.820618 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:41:32.820628 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:41:32.820637 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:41:32.820647 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:41:32.820656 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:41:32.820666 | orchestrator | 2026-02-05 04:41:32.820675 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-05 04:41:32.820684 | orchestrator | Thursday 05 February 2026 04:41:02 +0000 (0:00:31.954) 0:00:47.322 ***** 2026-02-05 04:41:32.820694 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.820704 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.820713 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.820722 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.820731 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.820740 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.820749 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.820759 | orchestrator | 2026-02-05 04:41:32.820769 | 
orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 04:41:32.820778 | orchestrator | Thursday 05 February 2026 04:41:04 +0000 (0:00:02.094) 0:00:49.417 ***** 2026-02-05 04:41:32.820788 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-05 04:41:32.820799 | orchestrator | 2026-02-05 04:41:32.820809 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 04:41:32.820818 | orchestrator | Thursday 05 February 2026 04:41:07 +0000 (0:00:02.616) 0:00:52.033 ***** 2026-02-05 04:41:32.820828 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.820838 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.820852 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.820866 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.820879 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.820892 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.820904 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.820917 | orchestrator | 2026-02-05 04:41:32.820949 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 04:41:32.820963 | orchestrator | Thursday 05 February 2026 04:41:09 +0000 (0:00:02.467) 0:00:54.501 ***** 2026-02-05 04:41:32.820977 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.820990 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821003 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821035 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821046 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.821059 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821073 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821082 | orchestrator | 2026-02-05 04:41:32.821090 | orchestrator | 
TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 04:41:32.821098 | orchestrator | Thursday 05 February 2026 04:41:11 +0000 (0:00:01.872) 0:00:56.373 ***** 2026-02-05 04:41:32.821106 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.821114 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821121 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821129 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821146 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.821155 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821162 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821170 | orchestrator | 2026-02-05 04:41:32.821178 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 04:41:32.821187 | orchestrator | Thursday 05 February 2026 04:41:14 +0000 (0:00:02.736) 0:00:59.110 ***** 2026-02-05 04:41:32.821195 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.821202 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821210 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821218 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821226 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.821234 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821242 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821250 | orchestrator | 2026-02-05 04:41:32.821258 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 04:41:32.821266 | orchestrator | Thursday 05 February 2026 04:41:16 +0000 (0:00:01.923) 0:01:01.033 ***** 2026-02-05 04:41:32.821274 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.821282 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821290 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821297 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821309 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 04:41:32.821322 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821333 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821345 | orchestrator | 2026-02-05 04:41:32.821366 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 04:41:32.821379 | orchestrator | Thursday 05 February 2026 04:41:18 +0000 (0:00:02.071) 0:01:03.105 ***** 2026-02-05 04:41:32.821391 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.821405 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821419 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821432 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821443 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.821451 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821459 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821466 | orchestrator | 2026-02-05 04:41:32.821474 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 04:41:32.821483 | orchestrator | Thursday 05 February 2026 04:41:20 +0000 (0:00:01.911) 0:01:05.017 ***** 2026-02-05 04:41:32.821491 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:41:32.821499 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:41:32.821507 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:41:32.821515 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:41:32.821523 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:41:32.821531 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:41:32.821538 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:41:32.821546 | orchestrator | 2026-02-05 04:41:32.821554 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 04:41:32.821562 | orchestrator | Thursday 05 February 2026 04:41:22 +0000 (0:00:02.040) 0:01:07.057 ***** 2026-02-05 04:41:32.821570 | 
orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.821578 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821586 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821594 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821602 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.821609 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821617 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821625 | orchestrator | 2026-02-05 04:41:32.821633 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 04:41:32.821641 | orchestrator | Thursday 05 February 2026 04:41:24 +0000 (0:00:02.173) 0:01:09.231 ***** 2026-02-05 04:41:32.821649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:41:32.821657 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:41:32.821671 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:41:32.821679 | orchestrator | 2026-02-05 04:41:32.821687 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 04:41:32.821695 | orchestrator | Thursday 05 February 2026 04:41:26 +0000 (0:00:01.655) 0:01:10.887 ***** 2026-02-05 04:41:32.821703 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:32.821710 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:32.821718 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:32.821726 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:32.821734 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:32.821741 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:32.821749 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:32.821757 | orchestrator | 2026-02-05 04:41:32.821765 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 04:41:32.821773 | 
orchestrator | Thursday 05 February 2026 04:41:28 +0000 (0:00:02.044) 0:01:12.931 ***** 2026-02-05 04:41:32.821781 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:41:32.821789 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:41:32.821797 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:41:32.821805 | orchestrator | 2026-02-05 04:41:32.821813 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 04:41:32.821820 | orchestrator | Thursday 05 February 2026 04:41:31 +0000 (0:00:03.334) 0:01:16.265 ***** 2026-02-05 04:41:32.821835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 04:41:55.034137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 04:41:55.034235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 04:41:55.034247 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:41:55.034254 | orchestrator | 2026-02-05 04:41:55.034262 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 04:41:55.034272 | orchestrator | Thursday 05 February 2026 04:41:32 +0000 (0:00:01.363) 0:01:17.629 ***** 2026-02-05 04:41:55.034280 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 04:41:55.034289 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 04:41:55.034296 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 04:41:55.034302 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:41:55.034309 | orchestrator | 2026-02-05 04:41:55.034316 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 04:41:55.034322 | orchestrator | Thursday 05 February 2026 04:41:34 +0000 (0:00:01.851) 0:01:19.481 ***** 2026-02-05 04:41:55.034346 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:41:55.034357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:41:55.034385 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:41:55.034392 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 04:41:55.034399 | orchestrator | 2026-02-05 04:41:55.034405 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 04:41:55.034411 | orchestrator | Thursday 05 February 2026 04:41:35 +0000 (0:00:01.191) 0:01:20.673 ***** 2026-02-05 04:41:55.034419 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'de37024be869', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 04:41:28.849340', 'end': '2026-02-05 04:41:28.905409', 'delta': '0:00:00.056069', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de37024be869'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 04:41:55.034444 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'df4012ab4a61', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 04:41:29.694683', 'end': '2026-02-05 04:41:29.739768', 'delta': '0:00:00.045085', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df4012ab4a61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 04:41:55.034451 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '458f6feaf079', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 04:41:30.263200', 'end': '2026-02-05 04:41:30.310005', 'delta': '0:00:00.046805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['458f6feaf079'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 04:41:55.034457 | orchestrator | 2026-02-05 04:41:55.034463 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 04:41:55.034469 | orchestrator | Thursday 05 February 2026 04:41:37 +0000 (0:00:01.228) 0:01:21.901 ***** 2026-02-05 04:41:55.034475 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:41:55.034483 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:41:55.034489 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:41:55.034495 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:41:55.034501 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:41:55.034506 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:41:55.034512 | orchestrator | ok: [testbed-manager] 2026-02-05 04:41:55.034518 | orchestrator | 2026-02-05 04:41:55.034524 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 04:41:55.034536 | orchestrator | Thursday 05 February 2026 04:41:39 +0000 (0:00:02.441) 0:01:24.342 ***** 2026-02-05 04:41:55.034547 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:41:55.034552 | orchestrator | 2026-02-05 04:41:55.034557 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 04:41:55.034563 | orchestrator | Thursday 05 February 2026 
04:41:40 +0000 (0:00:01.257) 0:01:25.600 *****
2026-02-05 04:41:55.034569 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:41:55.034574 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:41:55.034580 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:41:55.034602 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:41:55.034610 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:41:55.034622 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:41:55.034629 | orchestrator | ok: [testbed-manager]
2026-02-05 04:41:55.034635 | orchestrator |
2026-02-05 04:41:55.034641 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 04:41:55.034648 | orchestrator | Thursday 05 February 2026 04:41:42 +0000 (0:00:01.966) 0:01:27.566 *****
2026-02-05 04:41:55.034654 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:41:55.034662 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-05 04:41:55.034669 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 04:41:55.034676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 04:41:55.034683 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-05 04:41:55.034690 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-05 04:41:55.034696 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 04:41:55.034703 | orchestrator |
2026-02-05 04:41:55.034710 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 04:41:55.034716 | orchestrator | Thursday 05 February 2026 04:41:46 +0000 (0:00:03.647) 0:01:31.214 *****
2026-02-05 04:41:55.034723 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:41:55.034729 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:41:55.034736 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:41:55.034742 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:41:55.034748 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:41:55.034754 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:41:55.034761 | orchestrator | ok: [testbed-manager]
2026-02-05 04:41:55.034768 | orchestrator |
2026-02-05 04:41:55.034774 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 04:41:55.034781 | orchestrator | Thursday 05 February 2026 04:41:48 +0000 (0:00:02.121) 0:01:33.336 *****
2026-02-05 04:41:55.034788 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:41:55.034795 | orchestrator |
2026-02-05 04:41:55.034803 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 04:41:55.034811 | orchestrator | Thursday 05 February 2026 04:41:49 +0000 (0:00:01.128) 0:01:34.465 *****
2026-02-05 04:41:55.034818 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:41:55.034825 | orchestrator |
2026-02-05 04:41:55.034832 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 04:41:55.034838 | orchestrator | Thursday 05 February 2026 04:41:50 +0000 (0:00:01.197) 0:01:35.663 *****
2026-02-05 04:41:55.034845 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:41:55.034851 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:41:55.034857 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:41:55.034863 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:41:55.034869 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:41:55.034875 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:41:55.034881 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:41:55.034887 | orchestrator |
2026-02-05 04:41:55.034893 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 04:41:55.034899 | orchestrator | Thursday 05 February 2026 04:41:53 +0000 (0:00:02.286) 0:01:37.950 *****
2026-02-05 04:41:55.034905 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:41:55.034920 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:41:55.034928 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:41:55.034935 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:41:55.034942 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:41:55.034949 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:41:55.034965 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:05.462938 | orchestrator |
2026-02-05 04:42:05.463159 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 04:42:05.463191 | orchestrator | Thursday 05 February 2026 04:41:55 +0000 (0:00:01.892) 0:01:39.842 *****
2026-02-05 04:42:05.463211 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:05.463232 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:05.463251 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:05.463268 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:42:05.463286 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:42:05.463305 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:42:05.463322 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:05.463341 | orchestrator |
2026-02-05 04:42:05.463358 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 04:42:05.463375 | orchestrator | Thursday 05 February 2026 04:41:57 +0000 (0:00:02.047) 0:01:41.890 *****
2026-02-05 04:42:05.463395 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:05.463414 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:05.463434 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:05.463454 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:42:05.463476 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:42:05.463497 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:42:05.463518 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:05.463536 | orchestrator |
2026-02-05 04:42:05.463557 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 04:42:05.463576 | orchestrator | Thursday 05 February 2026 04:41:59 +0000 (0:00:02.097) 0:01:43.987 *****
2026-02-05 04:42:05.463596 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:05.463616 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:05.463635 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:05.463653 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:42:05.463666 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:42:05.463679 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:42:05.463692 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:05.463705 | orchestrator |
2026-02-05 04:42:05.463718 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 04:42:05.463732 | orchestrator | Thursday 05 February 2026 04:42:01 +0000 (0:00:02.128) 0:01:46.115 *****
2026-02-05 04:42:05.463763 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:05.463777 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:05.463790 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:05.463802 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:42:05.463813 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:42:05.463824 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:42:05.463834 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:05.463845 | orchestrator |
2026-02-05 04:42:05.463856 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 04:42:05.463868 | orchestrator | Thursday 05 February 2026 04:42:03 +0000 (0:00:01.853) 0:01:47.969 *****
2026-02-05 04:42:05.463882 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:05.463900 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:05.463918 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:05.463936 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:42:05.463953 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:42:05.463972 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:42:05.463989 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:05.464006 | orchestrator |
2026-02-05 04:42:05.464058 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 04:42:05.464114 | orchestrator | Thursday 05 February 2026 04:42:05 +0000 (0:00:02.037) 0:01:50.006 *****
2026-02-05 04:42:05.464131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 04:42:05.464258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 04:42:05.464373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.464407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 04:42:05.600194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91e0d2c4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 04:42:05.600255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600284 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:05.600294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.600320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 04:42:05.600335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878263 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:05.878314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48b9971a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 04:42:05.878328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}})
2026-02-05 04:42:05.878404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 04:42:05.878414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}})
2026-02-05 04:42:05.878424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.878441 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:05.878450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 04:42:05.878465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.912464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-05 04:42:05.912631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.912654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}})
2026-02-05 04:42:05.912668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}})
2026-02-05 04:42:05.912681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:05.912727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-02-05 04:42:05.912763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:05.912783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:05.912803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M', 'dm-uuid-CRYPT-LUKS2-7cbe1ae0472e401592481616ea071c47-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 04:42:05.912823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-05 04:42:05.912844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}})  2026-02-05 04:42:05.912877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 04:42:06.113710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}})  2026-02-05 04:42:06.113783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.113792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.113799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 04:42:06.113805 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:06.113811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.113816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 04:42:06.113822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 
04:42:06.113837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}})  2026-02-05 04:42:06.113864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}})  2026-02-05 04:42:06.113870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.113878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 04:42:06.113907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}})  2026-02-05 04:42:06.262458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 04:42:06.262473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}})  2026-02-05 04:42:06.262524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262584 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 04:42:06.262600 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:06.262615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:06.262658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}})  2026-02-05 04:42:06.262672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}})  2026-02-05 04:42:06.262707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 04:42:07.435536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435562 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435707 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:42:07.435726 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:07.435747 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-23-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 
'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 04:42:07.435766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:07.435784 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:07.435803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:07.435854 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6fc2347', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 04:42:07.831652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:07.831740 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 04:42:07.831751 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:42:07.831760 | orchestrator |
2026-02-05 04:42:07.831768 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-05 04:42:07.831776 | orchestrator | Thursday 05 February 2026 04:42:07 +0000 (0:00:02.240) 0:01:52.247 *****
2026-02-05 04:42:07.831785 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831816 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831824 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831832 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831864 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831872 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831879 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831888 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:07.831912 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.052851 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.052961 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:42:08.052981 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053130 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053148 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053161 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053189 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053248 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053287 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91e0d2c4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053321 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.053344 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.312876 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:42:08.312952 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.312982 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.312989 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.312998 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313064 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313073 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313092 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313152 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48b9971a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313173 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313182 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.313191 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:42:08.313217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}}, 'ansible_loop_var': 'item'})
2026-02-05 04:42:08.469741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.532863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.532959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.532988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.533077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.533089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.533099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.533113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.533121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': 
'512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.533142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M', 'dm-uuid-CRYPT-LUKS2-7cbe1ae0472e401592481616ea071c47-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636535 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:08.636545 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636575 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.636647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.637346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.637372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.637397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 
'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731346 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731359 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731423 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731454 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.731487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 
'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836468 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:08.836484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 
'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836577 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836589 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
 2026-02-05 04:42:08.836602 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:08.836610 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:08.836623 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766110 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766205 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-23-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766216 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766224 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766231 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766260 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6fc2347', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 
'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6fc2347-eb32-4949-8ca3-7fc5e42443e4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766294 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766300 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:42:16.766307 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:42:16.766314 | orchestrator | 2026-02-05 04:42:16.766322 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 04:42:16.766329 | orchestrator | Thursday 05 February 2026 04:42:09 +0000 (0:00:02.562) 0:01:54.810 ***** 2026-02-05 04:42:16.766335 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:42:16.766342 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:42:16.766348 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:42:16.766354 | orchestrator | ok: [testbed-node-3] 2026-02-05 
04:42:16.766359 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:42:16.766365 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:42:16.766371 | orchestrator | ok: [testbed-manager] 2026-02-05 04:42:16.766376 | orchestrator | 2026-02-05 04:42:16.766382 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 04:42:16.766394 | orchestrator | Thursday 05 February 2026 04:42:12 +0000 (0:00:02.434) 0:01:57.245 ***** 2026-02-05 04:42:16.766400 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:42:16.766405 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:42:16.766410 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:42:16.766417 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:42:16.766423 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:42:16.766428 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:42:16.766434 | orchestrator | ok: [testbed-manager] 2026-02-05 04:42:16.766440 | orchestrator | 2026-02-05 04:42:16.766446 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:42:16.766452 | orchestrator | Thursday 05 February 2026 04:42:14 +0000 (0:00:01.872) 0:01:59.118 ***** 2026-02-05 04:42:16.766457 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:42:16.766463 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:42:16.766468 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:42:16.766474 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:42:16.766480 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:42:16.766485 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:42:16.766491 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:42:16.766496 | orchestrator | 2026-02-05 04:42:16.766502 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:42:16.766513 | orchestrator | Thursday 05 February 2026 04:42:16 +0000 (0:00:02.451) 0:02:01.569 ***** 2026-02-05 04:42:45.143185 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 04:42:45.143276 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:42:45.143286 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:42:45.143293 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.143299 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.143305 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:45.143312 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:42:45.143318 | orchestrator | 2026-02-05 04:42:45.143326 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:42:45.143334 | orchestrator | Thursday 05 February 2026 04:42:18 +0000 (0:00:01.836) 0:02:03.406 ***** 2026-02-05 04:42:45.143340 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:42:45.143359 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:42:45.143366 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:42:45.143372 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.143378 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.143384 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:45.143390 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-05 04:42:45.143396 | orchestrator | 2026-02-05 04:42:45.143402 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:42:45.143408 | orchestrator | Thursday 05 February 2026 04:42:21 +0000 (0:00:02.554) 0:02:05.960 ***** 2026-02-05 04:42:45.143414 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:42:45.143420 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:42:45.143426 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:42:45.143432 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.143438 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.143444 | orchestrator | skipping: [testbed-node-5] 
2026-02-05 04:42:45.143450 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:42:45.143455 | orchestrator | 2026-02-05 04:42:45.143461 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:42:45.143467 | orchestrator | Thursday 05 February 2026 04:42:23 +0000 (0:00:01.925) 0:02:07.886 ***** 2026-02-05 04:42:45.143474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:42:45.143480 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-05 04:42:45.143486 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 04:42:45.143492 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-05 04:42:45.143498 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:42:45.143504 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-05 04:42:45.143535 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 04:42:45.143542 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-05 04:42:45.143547 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-05 04:42:45.143553 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 04:42:45.143559 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-05 04:42:45.143565 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-05 04:42:45.143570 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-05 04:42:45.143576 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-05 04:42:45.143582 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-05 04:42:45.143588 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-05 04:42:45.143594 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-05 04:42:45.143600 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-05 
04:42:45.143606 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-05 04:42:45.143611 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-05 04:42:45.143617 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-05 04:42:45.143623 | orchestrator | 2026-02-05 04:42:45.143629 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:42:45.143635 | orchestrator | Thursday 05 February 2026 04:42:26 +0000 (0:00:02.939) 0:02:10.825 ***** 2026-02-05 04:42:45.143641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 04:42:45.143648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 04:42:45.143653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 04:42:45.143659 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:42:45.143665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 04:42:45.143671 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 04:42:45.143678 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 04:42:45.143685 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:42:45.143692 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 04:42:45.143699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 04:42:45.143706 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 04:42:45.143713 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:42:45.143719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 04:42:45.143726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 04:42:45.143733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 04:42:45.143740 | orchestrator | skipping: [testbed-node-3] 
2026-02-05 04:42:45.143746 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 04:42:45.143753 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 04:42:45.143759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 04:42:45.143766 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.143773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 04:42:45.143779 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 04:42:45.143786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 04:42:45.143793 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:45.143812 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 04:42:45.143820 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-05 04:42:45.143827 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-05 04:42:45.143833 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:42:45.143839 | orchestrator | 2026-02-05 04:42:45.143846 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 04:42:45.143858 | orchestrator | Thursday 05 February 2026 04:42:27 +0000 (0:00:01.913) 0:02:12.739 ***** 2026-02-05 04:42:45.143865 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:42:45.143871 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:42:45.143878 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:42:45.143888 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:42:45.143896 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 04:42:45.143904 | orchestrator | 2026-02-05 04:42:45.143910 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 04:42:45.143918 | orchestrator | Thursday 05 February 2026 04:42:29 +0000 (0:00:01.931) 0:02:14.670 ***** 2026-02-05 04:42:45.143924 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.143930 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.143936 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:45.143941 | orchestrator | 2026-02-05 04:42:45.143947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 04:42:45.143953 | orchestrator | Thursday 05 February 2026 04:42:31 +0000 (0:00:01.328) 0:02:15.999 ***** 2026-02-05 04:42:45.143962 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.143973 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.143979 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:45.143985 | orchestrator | 2026-02-05 04:42:45.143991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 04:42:45.143997 | orchestrator | Thursday 05 February 2026 04:42:32 +0000 (0:00:01.364) 0:02:17.363 ***** 2026-02-05 04:42:45.144002 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.144008 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:42:45.144014 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:42:45.144097 | orchestrator | 2026-02-05 04:42:45.144110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 04:42:45.144120 | orchestrator | Thursday 05 February 2026 04:42:33 +0000 (0:00:01.353) 0:02:18.717 ***** 2026-02-05 04:42:45.144131 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:42:45.144140 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:42:45.144150 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:42:45.144156 | orchestrator | 2026-02-05 04:42:45.144162 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-02-05 04:42:45.144168 | orchestrator | Thursday 05 February 2026 04:42:35 +0000 (0:00:01.398) 0:02:20.116 ***** 2026-02-05 04:42:45.144174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 04:42:45.144180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 04:42:45.144186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 04:42:45.144194 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.144204 | orchestrator | 2026-02-05 04:42:45.144214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 04:42:45.144224 | orchestrator | Thursday 05 February 2026 04:42:36 +0000 (0:00:01.374) 0:02:21.491 ***** 2026-02-05 04:42:45.144230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 04:42:45.144236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 04:42:45.144242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 04:42:45.144247 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.144253 | orchestrator | 2026-02-05 04:42:45.144259 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 04:42:45.144264 | orchestrator | Thursday 05 February 2026 04:42:38 +0000 (0:00:01.648) 0:02:23.139 ***** 2026-02-05 04:42:45.144270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 04:42:45.144276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 04:42:45.144288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 04:42:45.144294 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:42:45.144300 | orchestrator | 2026-02-05 04:42:45.144306 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 
04:42:45.144312 | orchestrator | Thursday 05 February 2026 04:42:39 +0000 (0:00:01.587) 0:02:24.726 ***** 2026-02-05 04:42:45.144317 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:42:45.144323 | orchestrator | ok: [testbed-node-4] 2026-02-05 04:42:45.144329 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:42:45.144334 | orchestrator | 2026-02-05 04:42:45.144340 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 04:42:45.144346 | orchestrator | Thursday 05 February 2026 04:42:41 +0000 (0:00:01.846) 0:02:26.573 ***** 2026-02-05 04:42:45.144352 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 04:42:45.144358 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 04:42:45.144364 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 04:42:45.144370 | orchestrator | 2026-02-05 04:42:45.144376 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 04:42:45.144381 | orchestrator | Thursday 05 February 2026 04:42:43 +0000 (0:00:01.588) 0:02:28.162 ***** 2026-02-05 04:42:45.144387 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:42:45.144393 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:42:45.144400 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:42:45.144406 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:42:45.144417 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:43:33.318577 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:43:33.318684 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:43:33.318694 | orchestrator | 2026-02-05 
04:43:33.318702 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 04:43:33.318709 | orchestrator | Thursday 05 February 2026 04:42:45 +0000 (0:00:01.787) 0:02:29.950 ***** 2026-02-05 04:43:33.318717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:43:33.318737 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:43:33.318743 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:43:33.318749 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:43:33.318754 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:43:33.318761 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:43:33.318767 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:43:33.318773 | orchestrator | 2026-02-05 04:43:33.318780 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-05 04:43:33.318786 | orchestrator | Thursday 05 February 2026 04:42:48 +0000 (0:00:02.872) 0:02:32.822 ***** 2026-02-05 04:43:33.318792 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:43:33.318799 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:43:33.318804 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:43:33.318811 | orchestrator | changed: [testbed-manager] 2026-02-05 04:43:33.318817 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:43:33.318824 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:43:33.318829 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:43:33.318835 | orchestrator | 2026-02-05 04:43:33.318841 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-02-05 04:43:33.318847 | orchestrator | Thursday 05 February 2026 04:42:59 +0000 (0:00:11.636) 0:02:44.459 ***** 2026-02-05 04:43:33.318873 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.318879 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.318886 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.318893 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.318898 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.318903 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.318909 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.318914 | orchestrator | 2026-02-05 04:43:33.318920 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-05 04:43:33.318925 | orchestrator | Thursday 05 February 2026 04:43:01 +0000 (0:00:01.970) 0:02:46.430 ***** 2026-02-05 04:43:33.318931 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.318937 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.318942 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.318949 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.318955 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.318961 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.318967 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.318973 | orchestrator | 2026-02-05 04:43:33.318978 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-05 04:43:33.318985 | orchestrator | Thursday 05 February 2026 04:43:03 +0000 (0:00:01.853) 0:02:48.284 ***** 2026-02-05 04:43:33.318991 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.318997 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:43:33.319002 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:43:33.319008 | orchestrator | changed: [testbed-node-0] 
2026-02-05 04:43:33.319013 | orchestrator | changed: [testbed-node-3] 2026-02-05 04:43:33.319018 | orchestrator | changed: [testbed-node-4] 2026-02-05 04:43:33.319072 | orchestrator | changed: [testbed-node-5] 2026-02-05 04:43:33.319082 | orchestrator | 2026-02-05 04:43:33.319089 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-05 04:43:33.319095 | orchestrator | Thursday 05 February 2026 04:43:06 +0000 (0:00:02.945) 0:02:51.229 ***** 2026-02-05 04:43:33.319102 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-05 04:43:33.319109 | orchestrator | 2026-02-05 04:43:33.319116 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-05 04:43:33.319121 | orchestrator | Thursday 05 February 2026 04:43:09 +0000 (0:00:02.827) 0:02:54.056 ***** 2026-02-05 04:43:33.319128 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319134 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319140 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319146 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319153 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319159 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319165 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319172 | orchestrator | 2026-02-05 04:43:33.319178 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-05 04:43:33.319185 | orchestrator | Thursday 05 February 2026 04:43:11 +0000 (0:00:01.911) 0:02:55.968 ***** 2026-02-05 04:43:33.319191 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319197 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319204 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 04:43:33.319210 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319217 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319223 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319229 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319236 | orchestrator | 2026-02-05 04:43:33.319242 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-05 04:43:33.319250 | orchestrator | Thursday 05 February 2026 04:43:13 +0000 (0:00:02.032) 0:02:58.000 ***** 2026-02-05 04:43:33.319265 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319287 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319292 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319297 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319301 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319306 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319310 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319314 | orchestrator | 2026-02-05 04:43:33.319319 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-05 04:43:33.319324 | orchestrator | Thursday 05 February 2026 04:43:15 +0000 (0:00:01.873) 0:02:59.874 ***** 2026-02-05 04:43:33.319328 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319331 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319335 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319345 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319349 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319353 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319357 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319363 | orchestrator | 2026-02-05 04:43:33.319369 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-02-05 04:43:33.319374 | orchestrator | Thursday 05 February 2026 04:43:17 +0000 (0:00:02.117) 0:03:01.991 ***** 2026-02-05 04:43:33.319380 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319385 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319392 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319398 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319403 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319409 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319416 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319422 | orchestrator | 2026-02-05 04:43:33.319428 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-05 04:43:33.319435 | orchestrator | Thursday 05 February 2026 04:43:19 +0000 (0:00:01.913) 0:03:03.905 ***** 2026-02-05 04:43:33.319441 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319448 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319454 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319460 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319466 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319472 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319478 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319482 | orchestrator | 2026-02-05 04:43:33.319486 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-05 04:43:33.319490 | orchestrator | Thursday 05 February 2026 04:43:21 +0000 (0:00:02.115) 0:03:06.020 ***** 2026-02-05 04:43:33.319494 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319497 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319501 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319505 | orchestrator | 
skipping: [testbed-node-3] 2026-02-05 04:43:33.319509 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319512 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319516 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319520 | orchestrator | 2026-02-05 04:43:33.319524 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-05 04:43:33.319528 | orchestrator | Thursday 05 February 2026 04:43:23 +0000 (0:00:01.918) 0:03:07.939 ***** 2026-02-05 04:43:33.319531 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319535 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319539 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319542 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319546 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319550 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319554 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319557 | orchestrator | 2026-02-05 04:43:33.319566 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-05 04:43:33.319570 | orchestrator | Thursday 05 February 2026 04:43:25 +0000 (0:00:02.162) 0:03:10.101 ***** 2026-02-05 04:43:33.319574 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319578 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319581 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319585 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319589 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319593 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319596 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319600 | orchestrator | 2026-02-05 04:43:33.319604 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-05 
04:43:33.319608 | orchestrator | Thursday 05 February 2026 04:43:27 +0000 (0:00:02.122) 0:03:12.223 ***** 2026-02-05 04:43:33.319611 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319615 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319619 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319623 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319626 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319630 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319634 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319638 | orchestrator | 2026-02-05 04:43:33.319641 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-05 04:43:33.319645 | orchestrator | Thursday 05 February 2026 04:43:29 +0000 (0:00:02.021) 0:03:14.245 ***** 2026-02-05 04:43:33.319649 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319652 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319656 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319660 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319664 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:33.319667 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319671 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319675 | orchestrator | 2026-02-05 04:43:33.319679 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-05 04:43:33.319682 | orchestrator | Thursday 05 February 2026 04:43:31 +0000 (0:00:01.985) 0:03:16.230 ***** 2026-02-05 04:43:33.319686 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:33.319690 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:33.319694 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:33.319698 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:33.319701 | orchestrator 
| skipping: [testbed-node-4] 2026-02-05 04:43:33.319705 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:33.319709 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:33.319712 | orchestrator | 2026-02-05 04:43:33.319721 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-05 04:43:54.402459 | orchestrator | Thursday 05 February 2026 04:43:33 +0000 (0:00:01.894) 0:03:18.125 ***** 2026-02-05 04:43:54.402581 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.402597 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.402606 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.402616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 04:43:54.402643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 04:43:54.402652 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.402661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})  2026-02-05 04:43:54.402670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})  2026-02-05 04:43:54.402740 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.402757 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 04:43:54.402772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  
2026-02-05 04:43:54.402787 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.402803 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.402818 | orchestrator | 2026-02-05 04:43:54.402832 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-05 04:43:54.402841 | orchestrator | Thursday 05 February 2026 04:43:35 +0000 (0:00:02.062) 0:03:20.187 ***** 2026-02-05 04:43:54.402850 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.402859 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.402867 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.402876 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.402884 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.402893 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.402901 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.402910 | orchestrator | 2026-02-05 04:43:54.402919 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-05 04:43:54.402927 | orchestrator | Thursday 05 February 2026 04:43:37 +0000 (0:00:01.876) 0:03:22.063 ***** 2026-02-05 04:43:54.402936 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.402945 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.402953 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.402961 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.402970 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.402978 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.402988 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.403002 | orchestrator | 2026-02-05 04:43:54.403014 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-05 04:43:54.403024 | orchestrator | Thursday 05 February 2026 04:43:39 +0000 (0:00:02.179) 0:03:24.243 ***** 
2026-02-05 04:43:54.403091 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.403101 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.403111 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.403121 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.403131 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.403140 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.403150 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.403160 | orchestrator | 2026-02-05 04:43:54.403170 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-05 04:43:54.403180 | orchestrator | Thursday 05 February 2026 04:43:41 +0000 (0:00:01.879) 0:03:26.123 ***** 2026-02-05 04:43:54.403191 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.403207 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.403218 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.403228 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.403238 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.403248 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.403258 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.403268 | orchestrator | 2026-02-05 04:43:54.403278 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-05 04:43:54.403288 | orchestrator | Thursday 05 February 2026 04:43:43 +0000 (0:00:02.270) 0:03:28.394 ***** 2026-02-05 04:43:54.403298 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.403308 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.403318 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.403328 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.403338 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.403357 | orchestrator | skipping: [testbed-node-5] 
2026-02-05 04:43:54.403366 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.403376 | orchestrator | 2026-02-05 04:43:54.403385 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-05 04:43:54.403393 | orchestrator | Thursday 05 February 2026 04:43:46 +0000 (0:00:02.434) 0:03:30.828 ***** 2026-02-05 04:43:54.403422 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.403461 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.403476 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.403488 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.403501 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.403514 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.403527 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.403540 | orchestrator | 2026-02-05 04:43:54.403554 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-05 04:43:54.403567 | orchestrator | Thursday 05 February 2026 04:43:48 +0000 (0:00:02.099) 0:03:32.928 ***** 2026-02-05 04:43:54.403605 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:43:54.403622 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:43:54.403637 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:43:54.403652 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:43:54.403668 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 04:43:54.403678 | orchestrator | 2026-02-05 04:43:54.403687 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-05 04:43:54.403703 | orchestrator | Thursday 05 February 2026 04:43:50 +0000 (0:00:02.140) 0:03:35.068 ***** 2026-02-05 04:43:54.403716 | orchestrator | ok: [testbed-node-3] 2026-02-05 04:43:54.403737 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 04:43:54.403757 | orchestrator | ok: [testbed-node-5] 2026-02-05 04:43:54.403771 | orchestrator | 2026-02-05 04:43:54.403786 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-05 04:43:54.403799 | orchestrator | Thursday 05 February 2026 04:43:51 +0000 (0:00:01.283) 0:03:36.352 ***** 2026-02-05 04:43:54.403814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 04:43:54.403826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 04:43:54.403840 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.403855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})  2026-02-05 04:43:54.403869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})  2026-02-05 04:43:54.403884 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.403899 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 04:43:54.403913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 04:43:54.403928 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.403942 | orchestrator | 2026-02-05 04:43:54.403951 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-05 04:43:54.403960 | orchestrator | Thursday 05 February 
2026 04:43:52 +0000 (0:00:01.334) 0:03:37.687 ***** 2026-02-05 04:43:54.403971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}, 'ansible_loop_var': 'item'})  2026-02-05 04:43:54.403992 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}, 'ansible_loop_var': 'item'})  2026-02-05 04:43:54.404001 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:43:54.404010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'}, 'ansible_loop_var': 'item'})  2026-02-05 04:43:54.404019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}, 'ansible_loop_var': 'item'})  2026-02-05 04:43:54.404050 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:43:54.404066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}, 
'ansible_loop_var': 'item'})  2026-02-05 04:43:54.404082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}, 'ansible_loop_var': 'item'})  2026-02-05 04:43:54.404091 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:43:54.404100 | orchestrator | 2026-02-05 04:43:54.404117 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-05 04:44:03.554825 | orchestrator | Thursday 05 February 2026 04:43:54 +0000 (0:00:01.520) 0:03:39.207 ***** 2026-02-05 04:44:03.554922 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:03.554934 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:03.554943 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:03.554952 | orchestrator | 2026-02-05 04:44:03.554960 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-05 04:44:03.554969 | orchestrator | Thursday 05 February 2026 04:43:55 +0000 (0:00:01.311) 0:03:40.518 ***** 2026-02-05 04:44:03.554993 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:03.555002 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:03.555010 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:03.555018 | orchestrator | 2026-02-05 04:44:03.555026 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-05 04:44:03.555061 | orchestrator | Thursday 05 February 2026 04:43:57 +0000 (0:00:01.354) 0:03:41.873 ***** 2026-02-05 04:44:03.555069 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:03.555077 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:03.555085 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:03.555093 | 
orchestrator | 2026-02-05 04:44:03.555101 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-05 04:44:03.555109 | orchestrator | Thursday 05 February 2026 04:43:58 +0000 (0:00:01.334) 0:03:43.207 ***** 2026-02-05 04:44:03.555118 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:03.555126 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:03.555134 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:03.555142 | orchestrator | 2026-02-05 04:44:03.555150 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-05 04:44:03.555158 | orchestrator | Thursday 05 February 2026 04:43:59 +0000 (0:00:01.369) 0:03:44.577 ***** 2026-02-05 04:44:03.555187 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}) 2026-02-05 04:44:03.555197 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'}) 2026-02-05 04:44:03.555205 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}) 2026-02-05 04:44:03.555213 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}) 2026-02-05 04:44:03.555221 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}) 2026-02-05 04:44:03.555229 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}) 2026-02-05 04:44:03.555246 | orchestrator | 2026-02-05 04:44:03.555254 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-05 04:44:03.555264 | orchestrator | Thursday 05 February 2026 04:44:02 +0000 (0:00:02.333) 0:03:46.911 ***** 2026-02-05 04:44:03.555276 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-de37fca4-ea41-596c-ab1a-50038d0e278e/osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770258987.1951907, 'mtime': 1770258987.1891906, 'ctime': 1770258987.1891906, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-de37fca4-ea41-596c-ab1a-50038d0e278e/osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:03.555310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567/osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1770259006.0285268, 'mtime': 1770259006.0245268, 'ctime': 1770259006.0245268, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567/osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:03.555327 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:03.555337 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-599b5b3c-37df-591b-a248-24d26d466625/osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1770258987.9703526, 'mtime': 1770258987.9663527, 'ctime': 1770258987.9663527, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-599b5b3c-37df-591b-a248-24d26d466625/osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:03.555346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c/osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1770259006.9806998, 'mtime': 1770259006.9776995, 'ctime': 1770259006.9776995, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c/osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:03.555355 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:03.555374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-27670a2c-7838-5627-a951-e8a6d97fe4be/osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1770258984.9619248, 'mtime': 1770258984.9579248, 'ctime': 1770258984.9579248, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-27670a2c-7838-5627-a951-e8a6d97fe4be/osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391002 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-51c61bf5-abad-542f-be8e-c69d5e860565/osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1770259004.421276, 'mtime': 1770259004.418276, 'ctime': 1770259004.418276, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 
'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-51c61bf5-abad-542f-be8e-c69d5e860565/osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391152 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:09.391171 | orchestrator | 2026-02-05 04:44:09.391184 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-05 04:44:09.391196 | orchestrator | Thursday 05 February 2026 04:44:03 +0000 (0:00:01.449) 0:03:48.361 ***** 2026-02-05 04:44:09.391207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 04:44:09.391218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 04:44:09.391228 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:09.391238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})  2026-02-05 04:44:09.391249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})  2026-02-05 04:44:09.391259 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:09.391268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 
'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 04:44:09.391278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 04:44:09.391288 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:09.391298 | orchestrator | 2026-02-05 04:44:09.391308 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-05 04:44:09.391318 | orchestrator | Thursday 05 February 2026 04:44:04 +0000 (0:00:01.331) 0:03:49.692 ***** 2026-02-05 04:44:09.391330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391376 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:09.391401 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 
'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391440 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:09.391451 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391470 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:09.391480 | orchestrator | 2026-02-05 04:44:09.391490 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-05 04:44:09.391500 | orchestrator | Thursday 05 February 2026 04:44:06 +0000 (0:00:01.450) 0:03:51.143 ***** 2026-02-05 04:44:09.391510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'})  2026-02-05 04:44:09.391521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'})  2026-02-05 04:44:09.391533 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:09.391546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'})  2026-02-05 04:44:09.391558 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'})  2026-02-05 04:44:09.391569 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:09.391580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'})  2026-02-05 04:44:09.391592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'})  2026-02-05 04:44:09.391603 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:09.391614 | orchestrator | 2026-02-05 04:44:09.391626 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-05 04:44:09.391637 | orchestrator | Thursday 05 February 2026 04:44:07 +0000 (0:00:01.576) 0:03:52.719 ***** 2026-02-05 04:44:09.391649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-de37fca4-ea41-596c-ab1a-50038d0e278e', 'data_vg': 'ceph-de37fca4-ea41-596c-ab1a-50038d0e278e'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-825a1c54-3e62-51fa-b7a4-9af3e8833567', 'data_vg': 'ceph-825a1c54-3e62-51fa-b7a4-9af3e8833567'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391680 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:09.391692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-599b5b3c-37df-591b-a248-24d26d466625', 'data_vg': 'ceph-599b5b3c-37df-591b-a248-24d26d466625'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c', 'data_vg': 'ceph-f66c2ad0-d8eb-5a81-b3e8-9df8f695bb6c'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391715 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:09.391731 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-27670a2c-7838-5627-a951-e8a6d97fe4be', 'data_vg': 'ceph-27670a2c-7838-5627-a951-e8a6d97fe4be'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:09.391749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-51c61bf5-abad-542f-be8e-c69d5e860565', 'data_vg': 'ceph-51c61bf5-abad-542f-be8e-c69d5e860565'}, 'ansible_loop_var': 'item'})  2026-02-05 04:44:18.553191 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:18.553278 | orchestrator | 2026-02-05 04:44:18.553288 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-05 04:44:18.553296 | orchestrator | Thursday 05 February 2026 04:44:09 +0000 (0:00:01.476) 0:03:54.196 ***** 2026-02-05 04:44:18.553303 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:18.553309 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:18.553315 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:18.553321 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:18.553327 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 04:44:18.553333 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:18.553338 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:18.553344 | orchestrator | 2026-02-05 04:44:18.553350 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-05 04:44:18.553357 | orchestrator | Thursday 05 February 2026 04:44:11 +0000 (0:00:01.815) 0:03:56.011 ***** 2026-02-05 04:44:18.553363 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:18.553369 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:18.553375 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:18.553381 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:18.553387 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 04:44:18.553393 | orchestrator | 2026-02-05 04:44:18.553399 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-05 04:44:18.553405 | orchestrator | Thursday 05 February 2026 04:44:13 +0000 (0:00:02.426) 0:03:58.437 ***** 2026-02-05 04:44:18.553411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553463 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:18.553469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553499 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:18.553504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553533 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:18.553539 | orchestrator 
| 2026-02-05 04:44:18.553545 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-05 04:44:18.553551 | orchestrator | Thursday 05 February 2026 04:44:15 +0000 (0:00:01.411) 0:03:59.848 ***** 2026-02-05 04:44:18.553569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553611 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:18.553617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-05 04:44:18.553646 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:18.553652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553685 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:18.553691 | orchestrator | 2026-02-05 04:44:18.553697 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-05 04:44:18.553703 | orchestrator | Thursday 05 February 2026 04:44:16 +0000 (0:00:01.635) 0:04:01.484 ***** 2026-02-05 04:44:18.553709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-05 04:44:18.553732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553738 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:18.553744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553778 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:18.553785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 04:44:18.553816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-05 04:44:18.553823 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:18.553830 | orchestrator | 2026-02-05 04:44:18.553837 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-05 04:44:18.553844 | orchestrator | Thursday 05 February 2026 04:44:18 +0000 (0:00:01.467) 0:04:02.951 ***** 2026-02-05 04:44:18.553851 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:18.553861 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:18.553872 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.766992 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767134 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767153 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767166 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767179 | orchestrator | 2026-02-05 04:44:32.767194 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-05 04:44:32.767208 | orchestrator | Thursday 05 February 2026 04:44:19 +0000 (0:00:01.797) 0:04:04.749 ***** 2026-02-05 04:44:32.767221 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:32.767233 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:32.767247 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.767260 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767273 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767286 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767299 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767312 | orchestrator | 2026-02-05 04:44:32.767324 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-05 04:44:32.767337 | orchestrator | Thursday 05 February 2026 04:44:21 +0000 (0:00:02.038) 0:04:06.787 ***** 2026-02-05 04:44:32.767350 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 04:44:32.767362 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:32.767376 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.767390 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767403 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767416 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767431 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767440 | orchestrator | 2026-02-05 04:44:32.767448 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-02-05 04:44:32.767458 | orchestrator | Thursday 05 February 2026 04:44:24 +0000 (0:00:02.039) 0:04:08.827 ***** 2026-02-05 04:44:32.767466 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:32.767474 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:32.767482 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.767490 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767498 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767506 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767513 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767521 | orchestrator | 2026-02-05 04:44:32.767531 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-05 04:44:32.767542 | orchestrator | Thursday 05 February 2026 04:44:25 +0000 (0:00:01.850) 0:04:10.677 ***** 2026-02-05 04:44:32.767551 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:32.767560 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:32.767569 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.767590 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767607 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767616 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767625 | 
orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767634 | orchestrator | 2026-02-05 04:44:32.767644 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-05 04:44:32.767653 | orchestrator | Thursday 05 February 2026 04:44:27 +0000 (0:00:01.987) 0:04:12.665 ***** 2026-02-05 04:44:32.767662 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:32.767671 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:32.767680 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.767689 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767699 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767708 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767722 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767733 | orchestrator | 2026-02-05 04:44:32.767777 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-05 04:44:32.767796 | orchestrator | Thursday 05 February 2026 04:44:29 +0000 (0:00:01.998) 0:04:14.663 ***** 2026-02-05 04:44:32.767809 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:32.767822 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:44:32.767835 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:44:32.767848 | orchestrator | skipping: [testbed-node-3] 2026-02-05 04:44:32.767863 | orchestrator | skipping: [testbed-node-4] 2026-02-05 04:44:32.767877 | orchestrator | skipping: [testbed-node-5] 2026-02-05 04:44:32.767890 | orchestrator | skipping: [testbed-manager] 2026-02-05 04:44:32.767902 | orchestrator | 2026-02-05 04:44:32.767916 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-05 04:44:32.767930 | orchestrator | Thursday 05 February 2026 04:44:31 +0000 (0:00:02.079) 0:04:16.743 ***** 2026-02-05 04:44:32.767945 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-05 04:44:32.767960 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-05 04:44:32.767976 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-05 04:44:32.768002 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-05 04:44:32.768011 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-05 04:44:32.768021 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-05 04:44:32.768030 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:44:32.768097 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-05 04:44:32.768106 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-05 04:44:32.768114 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-05 04:44:32.768123 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:32.768131 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:32.768139 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:32.768147 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:44:32.768155 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:32.768163 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:32.768182 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:32.768190 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:32.768271 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:32.768282 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:32.768290 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:44:32.768299 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:32.768307 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:32.768314 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:32.768322 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:32.768330 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:32.768339 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:32.768352 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:32.768361 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:32.768369 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:32.768384 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.932779 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.932887 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.932913 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.932935 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:36.932954 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:44:36.932973 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:44:36.932985 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:36.933024 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.933125 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.933149 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:36.933208 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:36.933222 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.933233 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.933244 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.933256 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:44:36.933267 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.933278 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.933289 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:44:36.933300 | orchestrator |
2026-02-05 04:44:36.933312 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-05 04:44:36.933327 | orchestrator | Thursday 05 February 2026 04:44:34 +0000 (0:00:02.183) 0:04:18.926 *****
2026-02-05 04:44:36.933339 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:44:36.933352 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:44:36.933365 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:44:36.933378 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:44:36.933390 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:44:36.933402 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:44:36.933414 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:44:36.933427 | orchestrator |
2026-02-05 04:44:36.933439 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-05 04:44:36.933451 | orchestrator | Thursday 05 February 2026 04:44:36 +0000 (0:00:02.005) 0:04:20.932 *****
2026-02-05 04:44:36.933478 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.933490 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:36.933501 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:36.933531 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.933543 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.933564 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.933576 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:44:36.933587 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.933598 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:36.933609 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:36.933620 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.933631 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.933642 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.933653 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:44:36.933664 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.933675 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:36.933686 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:36.933697 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.933708 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.933719 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.933730 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:44:36.933741 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.933751 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:44:36.933762 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:44:36.933773 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:44:36.933789 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:44:36.933807 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:44:36.933818 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:44:36.933837 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:45:06.340213 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:45:06.340393 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:45:06.340437 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:45:06.340457 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.340479 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:45:06.340500 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:45:06.340520 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-05 04:45:06.340538 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-05 04:45:06.340556 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:45:06.340575 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:45:06.340593 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-05 04:45:06.340612 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:45:06.340629 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:45:06.340647 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.340667 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:45:06.340685 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.340705 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-05 04:45:06.340725 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-05 04:45:06.340782 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-05 04:45:06.340803 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.340822 | orchestrator |
2026-02-05 04:45:06.340844 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-05 04:45:06.340865 | orchestrator | Thursday 05 February 2026 04:44:38 +0000 (0:00:02.142) 0:04:23.075 *****
2026-02-05 04:45:06.340883 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.340922 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.340943 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.340962 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.340983 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.341001 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.341019 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.341127 | orchestrator |
2026-02-05 04:45:06.341152 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-05 04:45:06.341173 | orchestrator | Thursday 05 February 2026 04:44:40 +0000 (0:00:02.057) 0:04:25.132 *****
2026-02-05 04:45:06.341191 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.341208 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.341226 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.341244 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.341262 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.341279 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.341297 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.341317 | orchestrator |
2026-02-05 04:45:06.341335 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-05 04:45:06.341385 | orchestrator | Thursday 05 February 2026 04:44:42 +0000 (0:00:02.035) 0:04:27.168 *****
2026-02-05 04:45:06.341405 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.341423 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.341441 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.341460 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.341477 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.341495 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.341514 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.341533 | orchestrator |
2026-02-05 04:45:06.341551 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-05 04:45:06.341569 | orchestrator | Thursday 05 February 2026 04:44:44 +0000 (0:00:02.443) 0:04:29.612 *****
2026-02-05 04:45:06.341587 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-05 04:45:06.341607 | orchestrator |
2026-02-05 04:45:06.341626 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-05 04:45:06.341645 | orchestrator | Thursday 05 February 2026 04:44:47 +0000 (0:00:02.719) 0:04:32.331 *****
2026-02-05 04:45:06.341663 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341682 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341701 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341718 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341737 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341755 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341773 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-05 04:45:06.341812 | orchestrator |
2026-02-05 04:45:06.341833 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-05 04:45:06.341850 | orchestrator | Thursday 05 February 2026 04:44:50 +0000 (0:00:02.728) 0:04:35.059 *****
2026-02-05 04:45:06.341867 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.341884 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.341901 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.341921 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.341941 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.341960 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.341978 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.341996 | orchestrator |
2026-02-05 04:45:06.342013 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-05 04:45:06.342142 | orchestrator | Thursday 05 February 2026 04:44:52 +0000 (0:00:02.374) 0:04:37.433 *****
2026-02-05 04:45:06.342162 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.342180 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.342193 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.342204 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.342220 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.342238 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.342250 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.342261 | orchestrator |
2026-02-05 04:45:06.342272 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-05 04:45:06.342283 | orchestrator | Thursday 05 February 2026 04:44:54 +0000 (0:00:01.976) 0:04:39.410 *****
2026-02-05 04:45:06.342294 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:06.342306 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:45:06.342317 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:45:06.342328 | orchestrator | ok: [testbed-node-3]
2026-02-05 04:45:06.342339 | orchestrator | ok: [testbed-node-4]
2026-02-05 04:45:06.342349 | orchestrator | ok: [testbed-node-5]
2026-02-05 04:45:06.342360 | orchestrator | ok: [testbed-manager]
2026-02-05 04:45:06.342371 | orchestrator |
2026-02-05 04:45:06.342382 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-05 04:45:06.342393 | orchestrator | Thursday 05 February 2026 04:44:57 +0000 (0:00:02.587) 0:04:41.997 *****
2026-02-05 04:45:06.342404 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.342415 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.342426 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.342437 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.342448 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.342459 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.342470 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.342481 | orchestrator |
2026-02-05 04:45:06.342492 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-05 04:45:06.342515 | orchestrator | Thursday 05 February 2026 04:44:59 +0000 (0:00:02.231) 0:04:44.228 *****
2026-02-05 04:45:06.342527 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.342538 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:45:06.342547 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:45:06.342557 | orchestrator | skipping: [testbed-node-3]
2026-02-05 04:45:06.342567 | orchestrator | skipping: [testbed-node-4]
2026-02-05 04:45:06.342576 | orchestrator | skipping: [testbed-node-5]
2026-02-05 04:45:06.342586 | orchestrator | skipping: [testbed-manager]
2026-02-05 04:45:06.342596 | orchestrator |
2026-02-05 04:45:06.342606 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-05 04:45:06.342615 | orchestrator | Thursday 05 February 2026 04:45:01 +0000 (0:00:02.123) 0:04:46.352 *****
2026-02-05 04:45:06.342625 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:06.342635 | orchestrator |
2026-02-05 04:45:06.342645 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-05 04:45:06.342665 | orchestrator | Thursday 05 February 2026 04:45:04 +0000 (0:00:02.751) 0:04:49.103 *****
2026-02-05 04:45:06.342675 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:06.342685 | orchestrator |
2026-02-05 04:45:06.342710 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-05 04:45:46.264326 | orchestrator |
2026-02-05 04:45:46.264411 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 04:45:46.264420 | orchestrator | Thursday 05 February 2026 04:45:06 +0000 (0:00:02.045) 0:04:51.149 *****
2026-02-05 04:45:46.264426 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264432 | orchestrator |
2026-02-05 04:45:46.264438 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 04:45:46.264443 | orchestrator | Thursday 05 February 2026 04:45:07 +0000 (0:00:01.492) 0:04:52.642 *****
2026-02-05 04:45:46.264449 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264454 | orchestrator |
2026-02-05 04:45:46.264459 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-05 04:45:46.264464 | orchestrator | Thursday 05 February 2026 04:45:08 +0000 (0:00:01.103) 0:04:53.746 *****
2026-02-05 04:45:46.264471 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-05 04:45:46.264479 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-05 04:45:46.264484 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-05 04:45:46.264490 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-05 04:45:46.264496 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-05 04:45:46.264503 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}])
2026-02-05 04:45:46.264510 | orchestrator |
2026-02-05 04:45:46.264515 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-05 04:45:46.264520 | orchestrator |
2026-02-05 04:45:46.264526 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-05 04:45:46.264531 | orchestrator | Thursday 05 February 2026 04:45:20 +0000 (0:00:11.197) 0:05:04.943 *****
2026-02-05 04:45:46.264552 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264557 | orchestrator |
2026-02-05 04:45:46.264562 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-05 04:45:46.264567 | orchestrator | Thursday 05 February 2026 04:45:21 +0000 (0:00:01.482) 0:05:06.426 *****
2026-02-05 04:45:46.264578 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264584 | orchestrator |
2026-02-05 04:45:46.264589 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-05 04:45:46.264594 | orchestrator | Thursday 05 February 2026 04:45:22 +0000 (0:00:01.128) 0:05:07.554 *****
2026-02-05 04:45:46.264599 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:46.264606 | orchestrator |
2026-02-05 04:45:46.264611 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-05 04:45:46.264616 | orchestrator | Thursday 05 February 2026 04:45:23 +0000 (0:00:01.111) 0:05:08.666 *****
2026-02-05 04:45:46.264621 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264626 | orchestrator |
2026-02-05 04:45:46.264631 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 04:45:46.264636 | orchestrator | Thursday 05 February 2026 04:45:24 +0000 (0:00:01.115) 0:05:09.795 *****
2026-02-05 04:45:46.264641 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-05 04:45:46.264647 | orchestrator |
2026-02-05 04:45:46.264652 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 04:45:46.264667 | orchestrator | Thursday 05 February 2026 04:45:26 +0000 (0:00:01.115) 0:05:10.910 *****
2026-02-05 04:45:46.264673 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264678 | orchestrator |
2026-02-05 04:45:46.264683 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 04:45:46.264688 | orchestrator | Thursday 05 February 2026 04:45:27 +0000 (0:00:01.499) 0:05:12.410 *****
2026-02-05 04:45:46.264693 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264698 | orchestrator |
2026-02-05 04:45:46.264703 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 04:45:46.264708 | orchestrator | Thursday 05 February 2026 04:45:28 +0000 (0:00:01.484) 0:05:13.555 *****
2026-02-05 04:45:46.264714 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264719 | orchestrator |
2026-02-05 04:45:46.264724 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 04:45:46.264729 | orchestrator | Thursday 05 February 2026 04:45:30 +0000 (0:00:01.484) 0:05:15.040 *****
2026-02-05 04:45:46.264734 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264739 | orchestrator |
2026-02-05 04:45:46.264744 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 04:45:46.264749 | orchestrator | Thursday 05 February 2026 04:45:31 +0000 (0:00:01.145) 0:05:16.185 *****
2026-02-05 04:45:46.264754 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264759 | orchestrator |
2026-02-05 04:45:46.264764 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 04:45:46.264769 | orchestrator | Thursday 05 February 2026 04:45:32 +0000 (0:00:01.123) 0:05:17.309 *****
2026-02-05 04:45:46.264775 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264780 | orchestrator |
2026-02-05 04:45:46.264785 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 04:45:46.264791 | orchestrator | Thursday 05 February 2026 04:45:33 +0000 (0:00:01.136) 0:05:18.445 *****
2026-02-05 04:45:46.264797 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:46.264802 | orchestrator |
2026-02-05 04:45:46.264807 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 04:45:46.264812 | orchestrator | Thursday 05 February 2026 04:45:34 +0000 (0:00:01.121) 0:05:19.567 *****
2026-02-05 04:45:46.264817 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264822 | orchestrator |
2026-02-05 04:45:46.264827 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 04:45:46.264832 | orchestrator | Thursday 05 February 2026 04:45:35 +0000 (0:00:01.131) 0:05:20.698 *****
2026-02-05 04:45:46.264842 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:45:46.264848 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 04:45:46.264853 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 04:45:46.264858 | orchestrator |
2026-02-05 04:45:46.264863 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 04:45:46.264868 | orchestrator | Thursday 05 February 2026 04:45:37 +0000 (0:00:01.644) 0:05:22.343 *****
2026-02-05 04:45:46.264873 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:45:46.264878 | orchestrator |
2026-02-05 04:45:46.264883 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 04:45:46.264889 | orchestrator | Thursday 05 February 2026 04:45:38 +0000 (0:00:01.237) 0:05:23.581 *****
2026-02-05 04:45:46.264894 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:45:46.264900 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 04:45:46.264905 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 04:45:46.264911 | orchestrator |
2026-02-05 04:45:46.264917 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 04:45:46.264923 | orchestrator | Thursday 05 February 2026 04:45:41 +0000 (0:00:03.107) 0:05:26.689 *****
2026-02-05 04:45:46.264929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:45:46.264936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 04:45:46.264942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 04:45:46.264948 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:46.264953 | orchestrator |
2026-02-05 04:45:46.264959 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 04:45:46.264965 | orchestrator | Thursday 05 February 2026 04:45:43 +0000 (0:00:01.360) 0:05:28.049 *****
2026-02-05 04:45:46.264972 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 04:45:46.264983 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 04:45:46.264990 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 04:45:46.264996 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:45:46.265002 | orchestrator |
2026-02-05 04:45:46.265008 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 04:45:46.265014 | orchestrator | Thursday 05 February 2026 04:45:45 +0000 (0:00:01.824) 0:05:29.874 *****
2026-02-05 04:45:46.265024 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 04:46:07.459847 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 04:46:07.459979 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 04:46:07.459997 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:46:07.460010 | orchestrator |
2026-02-05 04:46:07.460021 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 04:46:07.460033 | orchestrator | Thursday 05 February 2026 04:45:46 +0000 (0:00:01.198) 0:05:31.072 *****
2026-02-05 04:46:07.460138 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'de37024be869', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 04:45:39.271066', 'end': '2026-02-05 04:45:39.324221', 'delta': '0:00:00.053155', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de37024be869'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 04:46:07.460150 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'df4012ab4a61', 'stderr': '', 'rc': 0, 'cmd':
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 04:45:39.861736', 'end': '2026-02-05 04:45:39.908608', 'delta': '0:00:00.046872', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df4012ab4a61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 04:46:07.460170 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '458f6feaf079', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 04:45:40.693182', 'end': '2026-02-05 04:45:40.737172', 'delta': '0:00:00.043990', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['458f6feaf079'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 04:46:07.460177 | orchestrator | 2026-02-05 04:46:07.460184 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 04:46:07.460191 | orchestrator | Thursday 05 February 2026 04:45:47 +0000 (0:00:01.199) 0:05:32.272 ***** 2026-02-05 04:46:07.460197 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:46:07.460205 | orchestrator | 2026-02-05 04:46:07.460211 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 04:46:07.460217 | orchestrator | 
Thursday 05 February 2026 04:45:49 +0000 (0:00:01.626) 0:05:33.899 ***** 2026-02-05 04:46:07.460228 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460238 | orchestrator | 2026-02-05 04:46:07.460248 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 04:46:07.460258 | orchestrator | Thursday 05 February 2026 04:45:50 +0000 (0:00:01.231) 0:05:35.130 ***** 2026-02-05 04:46:07.460268 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:46:07.460277 | orchestrator | 2026-02-05 04:46:07.460295 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 04:46:07.460305 | orchestrator | Thursday 05 February 2026 04:45:51 +0000 (0:00:01.156) 0:05:36.287 ***** 2026-02-05 04:46:07.460334 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-05 04:46:07.460346 | orchestrator | 2026-02-05 04:46:07.460357 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 04:46:07.460368 | orchestrator | Thursday 05 February 2026 04:45:54 +0000 (0:00:03.442) 0:05:39.730 ***** 2026-02-05 04:46:07.460379 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:46:07.460390 | orchestrator | 2026-02-05 04:46:07.460402 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 04:46:07.460413 | orchestrator | Thursday 05 February 2026 04:45:56 +0000 (0:00:01.109) 0:05:40.839 ***** 2026-02-05 04:46:07.460423 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460434 | orchestrator | 2026-02-05 04:46:07.460445 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 04:46:07.460455 | orchestrator | Thursday 05 February 2026 04:45:57 +0000 (0:00:01.114) 0:05:41.954 ***** 2026-02-05 04:46:07.460466 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460478 | orchestrator | 2026-02-05 
04:46:07.460489 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 04:46:07.460500 | orchestrator | Thursday 05 February 2026 04:45:58 +0000 (0:00:01.233) 0:05:43.187 ***** 2026-02-05 04:46:07.460510 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460519 | orchestrator | 2026-02-05 04:46:07.460531 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 04:46:07.460542 | orchestrator | Thursday 05 February 2026 04:45:59 +0000 (0:00:01.155) 0:05:44.343 ***** 2026-02-05 04:46:07.460552 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460563 | orchestrator | 2026-02-05 04:46:07.460575 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 04:46:07.460586 | orchestrator | Thursday 05 February 2026 04:46:00 +0000 (0:00:01.142) 0:05:45.485 ***** 2026-02-05 04:46:07.460595 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460603 | orchestrator | 2026-02-05 04:46:07.460610 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 04:46:07.460618 | orchestrator | Thursday 05 February 2026 04:46:01 +0000 (0:00:01.111) 0:05:46.597 ***** 2026-02-05 04:46:07.460625 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460633 | orchestrator | 2026-02-05 04:46:07.460641 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 04:46:07.460647 | orchestrator | Thursday 05 February 2026 04:46:02 +0000 (0:00:01.094) 0:05:47.691 ***** 2026-02-05 04:46:07.460653 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460660 | orchestrator | 2026-02-05 04:46:07.460666 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 04:46:07.460672 | orchestrator | Thursday 05 February 2026 04:46:03 +0000 (0:00:01.106) 
0:05:48.798 ***** 2026-02-05 04:46:07.460678 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460685 | orchestrator | 2026-02-05 04:46:07.460691 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 04:46:07.460698 | orchestrator | Thursday 05 February 2026 04:46:05 +0000 (0:00:01.107) 0:05:49.906 ***** 2026-02-05 04:46:07.460705 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:07.460711 | orchestrator | 2026-02-05 04:46:07.460717 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 04:46:07.460724 | orchestrator | Thursday 05 February 2026 04:46:06 +0000 (0:00:01.123) 0:05:51.030 ***** 2026-02-05 04:46:07.460731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:07.460746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:07.460759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:07.460767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 04:46:07.460784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:08.677011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:08.677135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:08.677161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 04:46:08.677183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:08.677188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:46:08.677192 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:08.677197 | orchestrator | 2026-02-05 04:46:08.677202 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 04:46:08.677207 | orchestrator | Thursday 05 February 2026 04:46:07 +0000 (0:00:01.236) 0:05:52.266 ***** 2026-02-05 04:46:08.677223 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:08.677229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:08.677233 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:08.677243 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:08.677251 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:08.677255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:08.677264 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:32.395437 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:32.395597 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:32.395618 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:46:32.395632 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.395646 | orchestrator | 2026-02-05 04:46:32.395659 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 04:46:32.395672 | orchestrator | Thursday 05 February 2026 04:46:08 +0000 (0:00:01.223) 0:05:53.489 ***** 2026-02-05 04:46:32.395683 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:46:32.395695 | orchestrator | 2026-02-05 04:46:32.395706 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 04:46:32.395717 | orchestrator | Thursday 05 February 2026 04:46:10 +0000 (0:00:01.537) 0:05:55.027 ***** 2026-02-05 04:46:32.395728 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:46:32.395739 | orchestrator | 2026-02-05 04:46:32.395750 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:46:32.395790 | orchestrator | Thursday 05 February 2026 04:46:11 +0000 (0:00:01.141) 0:05:56.168 ***** 2026-02-05 04:46:32.395810 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:46:32.395822 | orchestrator | 2026-02-05 04:46:32.395833 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:46:32.395844 | orchestrator | Thursday 05 February 2026 04:46:12 +0000 (0:00:01.502) 0:05:57.670 ***** 2026-02-05 04:46:32.395855 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.395866 | orchestrator | 2026-02-05 04:46:32.395876 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:46:32.395887 | orchestrator | Thursday 05 February 2026 04:46:13 +0000 (0:00:01.123) 0:05:58.794 ***** 2026-02-05 04:46:32.395898 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.395909 | orchestrator | 2026-02-05 04:46:32.395931 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 
04:46:32.395951 | orchestrator | Thursday 05 February 2026 04:46:15 +0000 (0:00:01.215) 0:06:00.010 ***** 2026-02-05 04:46:32.395969 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.395987 | orchestrator | 2026-02-05 04:46:32.396005 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:46:32.396023 | orchestrator | Thursday 05 February 2026 04:46:16 +0000 (0:00:01.167) 0:06:01.178 ***** 2026-02-05 04:46:32.396042 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:46:32.396092 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 04:46:32.396111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 04:46:32.396130 | orchestrator | 2026-02-05 04:46:32.396149 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:46:32.396169 | orchestrator | Thursday 05 February 2026 04:46:18 +0000 (0:00:01.900) 0:06:03.078 ***** 2026-02-05 04:46:32.396187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 04:46:32.396204 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 04:46:32.396215 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 04:46:32.396226 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.396237 | orchestrator | 2026-02-05 04:46:32.396248 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 04:46:32.396259 | orchestrator | Thursday 05 February 2026 04:46:19 +0000 (0:00:01.193) 0:06:04.272 ***** 2026-02-05 04:46:32.396269 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.396280 | orchestrator | 2026-02-05 04:46:32.396291 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 04:46:32.396302 | orchestrator | Thursday 05 February 2026 04:46:20 +0000 
(0:00:01.119) 0:06:05.392 ***** 2026-02-05 04:46:32.396312 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:46:32.396323 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:46:32.396335 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:46:32.396346 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:46:32.396356 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:46:32.396367 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:46:32.396385 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:46:32.396396 | orchestrator | 2026-02-05 04:46:32.396407 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 04:46:32.396418 | orchestrator | Thursday 05 February 2026 04:46:22 +0000 (0:00:02.102) 0:06:07.495 ***** 2026-02-05 04:46:32.396429 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:46:32.396440 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:46:32.396450 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:46:32.396461 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:46:32.396472 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:46:32.396482 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:46:32.396493 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 
04:46:32.396503 | orchestrator | 2026-02-05 04:46:32.396514 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-05 04:46:32.396525 | orchestrator | Thursday 05 February 2026 04:46:25 +0000 (0:00:02.733) 0:06:10.228 ***** 2026-02-05 04:46:32.396536 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-05 04:46:32.396555 | orchestrator | 2026-02-05 04:46:32.396566 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-05 04:46:32.396577 | orchestrator | Thursday 05 February 2026 04:46:27 +0000 (0:00:02.293) 0:06:12.521 ***** 2026-02-05 04:46:32.396588 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.396598 | orchestrator | 2026-02-05 04:46:32.396609 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-05 04:46:32.396620 | orchestrator | Thursday 05 February 2026 04:46:28 +0000 (0:00:01.257) 0:06:13.779 ***** 2026-02-05 04:46:32.396631 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:46:32.396641 | orchestrator | 2026-02-05 04:46:32.396652 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-05 04:46:32.396663 | orchestrator | Thursday 05 February 2026 04:46:30 +0000 (0:00:01.118) 0:06:14.898 ***** 2026-02-05 04:46:32.396674 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-05 04:46:32.396684 | orchestrator | 2026-02-05 04:46:32.396695 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-05 04:46:32.396716 | orchestrator | Thursday 05 February 2026 04:46:32 +0000 (0:00:02.303) 0:06:17.201 ***** 2026-02-05 04:47:33.550262 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:47:33.550381 | orchestrator | 2026-02-05 04:47:33.550399 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-05 04:47:33.550412 | orchestrator | Thursday 05 February 2026 04:46:33 +0000 (0:00:01.133) 0:06:18.335 *****
2026-02-05 04:47:33.550425 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:47:33.550437 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 04:47:33.550449 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 04:47:33.550460 | orchestrator |
2026-02-05 04:47:33.550472 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-05 04:47:33.550483 | orchestrator | Thursday 05 February 2026 04:46:36 +0000 (0:00:02.498) 0:06:20.834 *****
2026-02-05 04:47:33.550497 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-05 04:47:33.550517 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-05 04:47:33.550544 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-05 04:47:33.550565 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-05 04:47:33.550586 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-05 04:47:33.550605 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-05 04:47:33.550622 | orchestrator |
2026-02-05 04:47:33.550640 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-05 04:47:33.550659 | orchestrator | Thursday 05 February 2026 04:46:49 +0000 (0:00:13.704) 0:06:34.538 *****
2026-02-05 04:47:33.550679 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:47:33.550698 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 04:47:33.550717 | orchestrator |
2026-02-05 04:47:33.550735 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-05 04:47:33.550754 | orchestrator | Thursday 05 February 2026 04:46:53 +0000 (0:00:04.026) 0:06:38.565 *****
2026-02-05 04:47:33.550773 | orchestrator | changed: [testbed-node-0]
2026-02-05 04:47:33.550788 | orchestrator |
2026-02-05 04:47:33.550804 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 04:47:33.550824 | orchestrator | Thursday 05 February 2026 04:46:56 +0000 (0:00:02.678) 0:06:41.243 *****
2026-02-05 04:47:33.550865 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-05 04:47:33.550918 | orchestrator |
2026-02-05 04:47:33.550938 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 04:47:33.550956 | orchestrator | Thursday 05 February 2026 04:46:57 +0000 (0:00:01.441) 0:06:42.685 *****
2026-02-05 04:47:33.550974 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-05 04:47:33.550993 | orchestrator |
2026-02-05 04:47:33.551032 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 04:47:33.551050 | orchestrator | Thursday 05 February 2026 04:46:59 +0000 (0:00:01.585) 0:06:44.270 *****
2026-02-05 04:47:33.551131 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.551152 | orchestrator |
2026-02-05 04:47:33.551172 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 04:47:33.551190 | orchestrator | Thursday 05 February 2026 04:47:01 +0000 (0:00:01.558) 0:06:45.829 *****
2026-02-05 04:47:33.551209 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551228 | orchestrator |
2026-02-05 04:47:33.551247 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 04:47:33.551266 | orchestrator | Thursday 05 February 2026 04:47:02 +0000 (0:00:01.094) 0:06:46.923 *****
2026-02-05 04:47:33.551284 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551303 | orchestrator |
2026-02-05 04:47:33.551321 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 04:47:33.551341 | orchestrator | Thursday 05 February 2026 04:47:03 +0000 (0:00:01.102) 0:06:48.026 *****
2026-02-05 04:47:33.551359 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551377 | orchestrator |
2026-02-05 04:47:33.551396 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 04:47:33.551414 | orchestrator | Thursday 05 February 2026 04:47:04 +0000 (0:00:01.092) 0:06:49.118 *****
2026-02-05 04:47:33.551432 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.551449 | orchestrator |
2026-02-05 04:47:33.551468 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 04:47:33.551487 | orchestrator | Thursday 05 February 2026 04:47:05 +0000 (0:00:01.530) 0:06:50.648 *****
2026-02-05 04:47:33.551504 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551522 | orchestrator |
2026-02-05 04:47:33.551542 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 04:47:33.551559 | orchestrator | Thursday 05 February 2026 04:47:06 +0000 (0:00:01.102) 0:06:51.751 *****
2026-02-05 04:47:33.551576 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551593 | orchestrator |
2026-02-05 04:47:33.551611 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 04:47:33.551628 | orchestrator | Thursday 05 February 2026 04:47:08 +0000 (0:00:01.108) 0:06:52.859 *****
2026-02-05 04:47:33.551646 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.551664 | orchestrator |
2026-02-05 04:47:33.551681 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 04:47:33.551699 | orchestrator | Thursday 05 February 2026 04:47:09 +0000 (0:00:01.547) 0:06:54.407 *****
2026-02-05 04:47:33.551718 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.551735 | orchestrator |
2026-02-05 04:47:33.551784 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 04:47:33.551804 | orchestrator | Thursday 05 February 2026 04:47:11 +0000 (0:00:01.638) 0:06:56.046 *****
2026-02-05 04:47:33.551821 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551833 | orchestrator |
2026-02-05 04:47:33.551844 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 04:47:33.551855 | orchestrator | Thursday 05 February 2026 04:47:12 +0000 (0:00:01.102) 0:06:57.148 *****
2026-02-05 04:47:33.551866 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.551877 | orchestrator |
2026-02-05 04:47:33.551888 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 04:47:33.551899 | orchestrator | Thursday 05 February 2026 04:47:13 +0000 (0:00:01.135) 0:06:58.284 *****
2026-02-05 04:47:33.551910 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551935 | orchestrator |
2026-02-05 04:47:33.551946 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 04:47:33.551958 | orchestrator | Thursday 05 February 2026 04:47:14 +0000 (0:00:01.126) 0:06:59.410 *****
2026-02-05 04:47:33.551969 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.551979 | orchestrator |
2026-02-05 04:47:33.551997 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 04:47:33.552020 | orchestrator | Thursday 05 February 2026 04:47:15 +0000 (0:00:01.099) 0:07:00.509 *****
2026-02-05 04:47:33.552046 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552091 | orchestrator |
2026-02-05 04:47:33.552138 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 04:47:33.552174 | orchestrator | Thursday 05 February 2026 04:47:16 +0000 (0:00:01.102) 0:07:01.611 *****
2026-02-05 04:47:33.552208 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552239 | orchestrator |
2026-02-05 04:47:33.552272 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 04:47:33.552289 | orchestrator | Thursday 05 February 2026 04:47:17 +0000 (0:00:01.092) 0:07:02.704 *****
2026-02-05 04:47:33.552305 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552321 | orchestrator |
2026-02-05 04:47:33.552337 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 04:47:33.552354 | orchestrator | Thursday 05 February 2026 04:47:18 +0000 (0:00:01.094) 0:07:03.799 *****
2026-02-05 04:47:33.552372 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.552389 | orchestrator |
2026-02-05 04:47:33.552407 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 04:47:33.552425 | orchestrator | Thursday 05 February 2026 04:47:20 +0000 (0:00:01.112) 0:07:04.912 *****
2026-02-05 04:47:33.552443 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.552462 | orchestrator |
2026-02-05 04:47:33.552481 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 04:47:33.552500 | orchestrator | Thursday 05 February 2026 04:47:21 +0000 (0:00:01.135) 0:07:06.047 *****
2026-02-05 04:47:33.552517 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:47:33.552535 | orchestrator |
2026-02-05 04:47:33.552554 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 04:47:33.552572 | orchestrator | Thursday 05 February 2026 04:47:22 +0000 (0:00:01.126) 0:07:07.173 *****
2026-02-05 04:47:33.552591 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552609 | orchestrator |
2026-02-05 04:47:33.552627 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 04:47:33.552659 | orchestrator | Thursday 05 February 2026 04:47:23 +0000 (0:00:01.123) 0:07:08.297 *****
2026-02-05 04:47:33.552670 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552681 | orchestrator |
2026-02-05 04:47:33.552692 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 04:47:33.552703 | orchestrator | Thursday 05 February 2026 04:47:24 +0000 (0:00:01.111) 0:07:09.408 *****
2026-02-05 04:47:33.552713 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552725 | orchestrator |
2026-02-05 04:47:33.552736 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 04:47:33.552746 | orchestrator | Thursday 05 February 2026 04:47:25 +0000 (0:00:01.088) 0:07:10.496 *****
2026-02-05 04:47:33.552757 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552768 | orchestrator |
2026-02-05 04:47:33.552779 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 04:47:33.552790 | orchestrator | Thursday 05 February 2026 04:47:26 +0000 (0:00:01.097) 0:07:11.594 *****
2026-02-05 04:47:33.552801 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552812 | orchestrator |
2026-02-05 04:47:33.552822 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 04:47:33.552833 | orchestrator | Thursday 05 February 2026 04:47:27 +0000 (0:00:01.120) 0:07:12.715 *****
2026-02-05 04:47:33.552859 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552870 | orchestrator |
2026-02-05 04:47:33.552881 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 04:47:33.552892 | orchestrator | Thursday 05 February 2026 04:47:29 +0000 (0:00:01.122) 0:07:13.838 *****
2026-02-05 04:47:33.552903 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552913 | orchestrator |
2026-02-05 04:47:33.552924 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-05 04:47:33.552935 | orchestrator | Thursday 05 February 2026 04:47:30 +0000 (0:00:01.166) 0:07:15.005 *****
2026-02-05 04:47:33.552946 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.552957 | orchestrator |
2026-02-05 04:47:33.552968 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-05 04:47:33.552979 | orchestrator | Thursday 05 February 2026 04:47:31 +0000 (0:00:01.103) 0:07:16.108 *****
2026-02-05 04:47:33.552990 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.553000 | orchestrator |
2026-02-05 04:47:33.553011 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-05 04:47:33.553022 | orchestrator | Thursday 05 February 2026 04:47:32 +0000 (0:00:01.142) 0:07:17.251 *****
2026-02-05 04:47:33.553033 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:47:33.553044 | orchestrator |
2026-02-05 04:47:33.553054 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-05 04:47:33.553101 | orchestrator | Thursday 05 February 2026 04:47:33 +0000 (0:00:01.106) 0:07:18.358 *****
2026-02-05 04:48:25.244766 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.244855 | orchestrator |
2026-02-05 04:48:25.244866 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-05 04:48:25.244873 | orchestrator | Thursday 05 February 2026 04:47:34 +0000 (0:00:01.087) 0:07:19.445 *****
2026-02-05 04:48:25.244879 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.244884 | orchestrator |
2026-02-05 04:48:25.244890 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 04:48:25.244895 | orchestrator | Thursday 05 February 2026 04:47:35 +0000 (0:00:01.100) 0:07:20.545 *****
2026-02-05 04:48:25.244901 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.244907 | orchestrator |
2026-02-05 04:48:25.244912 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 04:48:25.244917 | orchestrator | Thursday 05 February 2026 04:47:37 +0000 (0:00:02.006) 0:07:22.551 *****
2026-02-05 04:48:25.244922 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.244927 | orchestrator |
2026-02-05 04:48:25.244933 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 04:48:25.244938 | orchestrator | Thursday 05 February 2026 04:47:40 +0000 (0:00:02.571) 0:07:25.122 *****
2026-02-05 04:48:25.244943 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-05 04:48:25.244949 | orchestrator |
2026-02-05 04:48:25.244954 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 04:48:25.244959 | orchestrator | Thursday 05 February 2026 04:47:41 +0000 (0:00:01.448) 0:07:26.572 *****
2026-02-05 04:48:25.244964 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.244970 | orchestrator |
2026-02-05 04:48:25.244975 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 04:48:25.244981 | orchestrator | Thursday 05 February 2026 04:47:42 +0000 (0:00:01.108) 0:07:27.680 *****
2026-02-05 04:48:25.244986 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.244991 | orchestrator |
2026-02-05 04:48:25.244996 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 04:48:25.245001 | orchestrator | Thursday 05 February 2026 04:47:44 +0000 (0:00:01.177) 0:07:28.858 *****
2026-02-05 04:48:25.245006 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 04:48:25.245012 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 04:48:25.245037 | orchestrator |
2026-02-05 04:48:25.245043 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 04:48:25.245048 | orchestrator | Thursday 05 February 2026 04:47:45 +0000 (0:00:01.917) 0:07:30.776 *****
2026-02-05 04:48:25.245053 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.245058 | orchestrator |
2026-02-05 04:48:25.245063 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 04:48:25.245068 | orchestrator | Thursday 05 February 2026 04:47:47 +0000 (0:00:01.639) 0:07:32.416 *****
2026-02-05 04:48:25.245074 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245125 | orchestrator |
2026-02-05 04:48:25.245131 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 04:48:25.245136 | orchestrator | Thursday 05 February 2026 04:47:48 +0000 (0:00:01.232) 0:07:33.648 *****
2026-02-05 04:48:25.245141 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245146 | orchestrator |
2026-02-05 04:48:25.245151 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 04:48:25.245188 | orchestrator | Thursday 05 February 2026 04:47:49 +0000 (0:00:01.097) 0:07:34.746 *****
2026-02-05 04:48:25.245194 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245200 | orchestrator |
2026-02-05 04:48:25.245205 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 04:48:25.245210 | orchestrator | Thursday 05 February 2026 04:47:51 +0000 (0:00:01.128) 0:07:35.875 *****
2026-02-05 04:48:25.245216 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-05 04:48:25.245221 | orchestrator |
2026-02-05 04:48:25.245226 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 04:48:25.245232 | orchestrator | Thursday 05 February 2026 04:47:52 +0000 (0:00:01.442) 0:07:37.317 *****
2026-02-05 04:48:25.245237 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.245243 | orchestrator |
2026-02-05 04:48:25.245248 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 04:48:25.245253 | orchestrator | Thursday 05 February 2026 04:47:54 +0000 (0:00:01.715) 0:07:39.033 *****
2026-02-05 04:48:25.245258 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 04:48:25.245264 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 04:48:25.245269 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 04:48:25.245274 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245279 | orchestrator |
2026-02-05 04:48:25.245284 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 04:48:25.245289 | orchestrator | Thursday 05 February 2026 04:47:55 +0000 (0:00:01.151) 0:07:40.185 *****
2026-02-05 04:48:25.245294 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245300 | orchestrator |
2026-02-05 04:48:25.245305 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 04:48:25.245310 | orchestrator | Thursday 05 February 2026 04:47:56 +0000 (0:00:01.130) 0:07:41.316 *****
2026-02-05 04:48:25.245315 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245321 | orchestrator |
2026-02-05 04:48:25.245327 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 04:48:25.245333 | orchestrator | Thursday 05 February 2026 04:47:57 +0000 (0:00:01.174) 0:07:42.490 *****
2026-02-05 04:48:25.245339 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245345 | orchestrator |
2026-02-05 04:48:25.245351 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 04:48:25.245370 | orchestrator | Thursday 05 February 2026 04:47:58 +0000 (0:00:01.190) 0:07:43.680 *****
2026-02-05 04:48:25.245377 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245382 | orchestrator |
2026-02-05 04:48:25.245388 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-05 04:48:25.245394 | orchestrator | Thursday 05 February 2026 04:48:00 +0000 (0:00:01.284) 0:07:44.967 *****
2026-02-05 04:48:25.245406 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245412 | orchestrator |
2026-02-05 04:48:25.245418 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 04:48:25.245423 | orchestrator | Thursday 05 February 2026 04:48:01 +0000 (0:00:01.137) 0:07:46.104 *****
2026-02-05 04:48:25.245429 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.245435 | orchestrator |
2026-02-05 04:48:25.245441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 04:48:25.245447 | orchestrator | Thursday 05 February 2026 04:48:03 +0000 (0:00:02.624) 0:07:48.728 *****
2026-02-05 04:48:25.245453 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.245459 | orchestrator |
2026-02-05 04:48:25.245465 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 04:48:25.245471 | orchestrator | Thursday 05 February 2026 04:48:05 +0000 (0:00:01.118) 0:07:49.847 *****
2026-02-05 04:48:25.245477 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-05 04:48:25.245483 | orchestrator |
2026-02-05 04:48:25.245489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-05 04:48:25.245495 | orchestrator | Thursday 05 February 2026 04:48:06 +0000 (0:00:01.440) 0:07:51.287 *****
2026-02-05 04:48:25.245501 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245507 | orchestrator |
2026-02-05 04:48:25.245512 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-05 04:48:25.245518 | orchestrator | Thursday 05 February 2026 04:48:07 +0000 (0:00:01.126) 0:07:52.414 *****
2026-02-05 04:48:25.245524 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245530 | orchestrator |
2026-02-05 04:48:25.245535 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-05 04:48:25.245541 | orchestrator | Thursday 05 February 2026 04:48:08 +0000 (0:00:01.118) 0:07:53.533 *****
2026-02-05 04:48:25.245547 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245553 | orchestrator |
2026-02-05 04:48:25.245559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-05 04:48:25.245565 | orchestrator | Thursday 05 February 2026 04:48:09 +0000 (0:00:01.139) 0:07:54.672 *****
2026-02-05 04:48:25.245571 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245576 | orchestrator |
2026-02-05 04:48:25.245582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-05 04:48:25.245588 | orchestrator | Thursday 05 February 2026 04:48:10 +0000 (0:00:01.149) 0:07:55.821 *****
2026-02-05 04:48:25.245593 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245598 | orchestrator |
2026-02-05 04:48:25.245603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-05 04:48:25.245608 | orchestrator | Thursday 05 February 2026 04:48:12 +0000 (0:00:01.112) 0:07:56.934 *****
2026-02-05 04:48:25.245613 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245618 | orchestrator |
2026-02-05 04:48:25.245624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-05 04:48:25.245632 | orchestrator | Thursday 05 February 2026 04:48:13 +0000 (0:00:01.125) 0:07:58.060 *****
2026-02-05 04:48:25.245637 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245642 | orchestrator |
2026-02-05 04:48:25.245647 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-05 04:48:25.245652 | orchestrator | Thursday 05 February 2026 04:48:14 +0000 (0:00:01.117) 0:07:59.177 *****
2026-02-05 04:48:25.245657 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:48:25.245662 | orchestrator |
2026-02-05 04:48:25.245667 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-05 04:48:25.245673 | orchestrator | Thursday 05 February 2026 04:48:15 +0000 (0:00:01.126) 0:08:00.304 *****
2026-02-05 04:48:25.245678 | orchestrator | ok: [testbed-node-0]
2026-02-05 04:48:25.245683 | orchestrator |
2026-02-05 04:48:25.245688 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 04:48:25.245697 | orchestrator | Thursday 05 February 2026 04:48:16 +0000 (0:00:01.196) 0:08:01.500 *****
2026-02-05 04:48:25.245702 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-05 04:48:25.245707 | orchestrator |
2026-02-05 04:48:25.245713 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-05 04:48:25.245718 | orchestrator | Thursday 05 February 2026 04:48:18 +0000 (0:00:01.431) 0:08:02.932 *****
2026-02-05 04:48:25.245723 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-05 04:48:25.245728 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-05 04:48:25.245734 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-05 04:48:25.245739 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-05 04:48:25.245744 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-05 04:48:25.245749 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-05 04:48:25.245754 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-05 04:48:25.245759 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-05 04:48:25.245765 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 04:48:25.245770 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 04:48:25.245775 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 04:48:25.245780 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 04:48:25.245785 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 04:48:25.245790 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 04:48:25.245798 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-05 04:49:12.248624 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-05 04:49:12.248757 | orchestrator |
2026-02-05 04:49:12.248786 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 04:49:12.248805 | orchestrator | Thursday 05 February 2026 04:48:25 +0000 (0:00:07.108) 0:08:10.040 *****
2026-02-05 04:49:12.248822 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.248876 | orchestrator |
2026-02-05 04:49:12.248894 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 04:49:12.248911 | orchestrator | Thursday 05 February 2026 04:48:26 +0000 (0:00:01.129) 0:08:11.169 *****
2026-02-05 04:49:12.248928 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.248944 | orchestrator |
2026-02-05 04:49:12.248960 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 04:49:12.248977 | orchestrator | Thursday 05 February 2026 04:48:27 +0000 (0:00:01.165) 0:08:12.335 *****
2026-02-05 04:49:12.248992 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249007 | orchestrator |
2026-02-05 04:49:12.249023 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 04:49:12.249038 | orchestrator | Thursday 05 February 2026 04:48:28 +0000 (0:00:01.166) 0:08:13.502 *****
2026-02-05 04:49:12.249056 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249071 | orchestrator |
2026-02-05 04:49:12.249087 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 04:49:12.249139 | orchestrator | Thursday 05 February 2026 04:48:29 +0000 (0:00:01.141) 0:08:14.643 *****
2026-02-05 04:49:12.249158 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249174 | orchestrator |
2026-02-05 04:49:12.249190 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 04:49:12.249206 | orchestrator | Thursday 05 February 2026 04:48:30 +0000 (0:00:01.104) 0:08:15.747 *****
2026-02-05 04:49:12.249223 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249239 | orchestrator |
2026-02-05 04:49:12.249256 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 04:49:12.249274 | orchestrator | Thursday 05 February 2026 04:48:32 +0000 (0:00:01.128) 0:08:16.876 *****
2026-02-05 04:49:12.249324 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249342 | orchestrator |
2026-02-05 04:49:12.249360 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 04:49:12.249379 | orchestrator | Thursday 05 February 2026 04:48:33 +0000 (0:00:01.126) 0:08:18.003 *****
2026-02-05 04:49:12.249395 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249411 | orchestrator |
2026-02-05 04:49:12.249423 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 04:49:12.249435 | orchestrator | Thursday 05 February 2026 04:48:34 +0000 (0:00:01.142) 0:08:19.145 *****
2026-02-05 04:49:12.249446 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249458 | orchestrator |
2026-02-05 04:49:12.249468 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 04:49:12.249478 | orchestrator | Thursday 05 February 2026 04:48:35 +0000 (0:00:01.093) 0:08:20.239 *****
2026-02-05 04:49:12.249488 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249498 | orchestrator |
2026-02-05 04:49:12.249507 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 04:49:12.249533 | orchestrator | Thursday 05 February 2026 04:48:36 +0000 (0:00:01.109) 0:08:21.348 *****
2026-02-05 04:49:12.249543 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249553 | orchestrator |
2026-02-05 04:49:12.249562 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 04:49:12.249572 | orchestrator | Thursday 05 February 2026 04:48:37 +0000 (0:00:01.113) 0:08:22.462 *****
2026-02-05 04:49:12.249582 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249591 | orchestrator |
2026-02-05 04:49:12.249601 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 04:49:12.249633 | orchestrator | Thursday 05 February 2026 04:48:38 +0000 (0:00:01.113) 0:08:23.575 *****
2026-02-05 04:49:12.249644 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249653 | orchestrator |
2026-02-05 04:49:12.249663 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 04:49:12.249673 | orchestrator | Thursday 05 February 2026 04:48:39 +0000 (0:00:01.201) 0:08:24.777 *****
2026-02-05 04:49:12.249682 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249692 | orchestrator |
2026-02-05 04:49:12.249701 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 04:49:12.249711 | orchestrator | Thursday 05 February 2026 04:48:41 +0000 (0:00:01.155) 0:08:25.932 *****
2026-02-05 04:49:12.249721 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249730 | orchestrator |
2026-02-05 04:49:12.249740 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 04:49:12.249750 | orchestrator | Thursday 05 February 2026 04:48:42 +0000 (0:00:01.181) 0:08:27.114 *****
2026-02-05 04:49:12.249759 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249770 | orchestrator |
2026-02-05 04:49:12.249786 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 04:49:12.249802 | orchestrator | Thursday 05 February 2026 04:48:43 +0000 (0:00:01.144) 0:08:28.258 *****
2026-02-05 04:49:12.249821 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249844 | orchestrator |
2026-02-05 04:49:12.249861 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 04:49:12.249879 | orchestrator | Thursday 05 February 2026 04:48:44 +0000 (0:00:01.097) 0:08:29.356 *****
2026-02-05 04:49:12.249957 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.249977 | orchestrator |
2026-02-05 04:49:12.249994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 04:49:12.250010 | orchestrator | Thursday 05 February 2026 04:48:45 +0000 (0:00:01.130) 0:08:30.486 *****
2026-02-05 04:49:12.250128 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250145 | orchestrator |
2026-02-05 04:49:12.250188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 04:49:12.250235 | orchestrator | Thursday 05 February 2026 04:48:46 +0000 (0:00:01.120) 0:08:31.606 *****
2026-02-05 04:49:12.250252 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250264 | orchestrator |
2026-02-05 04:49:12.250274 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 04:49:12.250284 | orchestrator | Thursday 05 February 2026 04:48:47 +0000 (0:00:01.138) 0:08:32.745 *****
2026-02-05 04:49:12.250293 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250303 | orchestrator |
2026-02-05 04:49:12.250313 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 04:49:12.250322 | orchestrator | Thursday 05 February 2026 04:48:49 +0000 (0:00:01.138) 0:08:33.884 *****
2026-02-05 04:49:12.250332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 04:49:12.250343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 04:49:12.250352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 04:49:12.250362 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250371 | orchestrator |
2026-02-05 04:49:12.250381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 04:49:12.250390 | orchestrator | Thursday 05 February 2026 04:48:50 +0000 (0:00:01.636) 0:08:35.521 *****
2026-02-05 04:49:12.250400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 04:49:12.250410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 04:49:12.250419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 04:49:12.250429 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250438 | orchestrator |
2026-02-05 04:49:12.250448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 04:49:12.250458 | orchestrator | Thursday 05 February 2026 04:48:51 +0000 (0:00:01.282) 0:08:36.803 *****
2026-02-05 04:49:12.250467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 04:49:12.250477 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 04:49:12.250486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 04:49:12.250496 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250506 | orchestrator |
2026-02-05 04:49:12.250515 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 04:49:12.250525 | orchestrator | Thursday 05 February 2026 04:48:53 +0000 (0:00:01.342) 0:08:38.145 *****
2026-02-05 04:49:12.250534 | orchestrator | skipping: [testbed-node-0]
2026-02-05 04:49:12.250544 |
orchestrator | 2026-02-05 04:49:12.250553 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 04:49:12.250563 | orchestrator | Thursday 05 February 2026 04:48:54 +0000 (0:00:01.098) 0:08:39.244 ***** 2026-02-05 04:49:12.250573 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-05 04:49:12.250583 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:49:12.250592 | orchestrator | 2026-02-05 04:49:12.250602 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 04:49:12.250611 | orchestrator | Thursday 05 February 2026 04:48:55 +0000 (0:00:01.355) 0:08:40.599 ***** 2026-02-05 04:49:12.250621 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:49:12.250631 | orchestrator | 2026-02-05 04:49:12.250640 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-05 04:49:12.250657 | orchestrator | Thursday 05 February 2026 04:48:57 +0000 (0:00:01.702) 0:08:42.302 ***** 2026-02-05 04:49:12.250667 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:49:12.250677 | orchestrator | 2026-02-05 04:49:12.250686 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-05 04:49:12.250696 | orchestrator | Thursday 05 February 2026 04:48:58 +0000 (0:00:01.112) 0:08:43.415 ***** 2026-02-05 04:49:12.250706 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-05 04:49:12.250716 | orchestrator | 2026-02-05 04:49:12.250733 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-05 04:49:12.250762 | orchestrator | Thursday 05 February 2026 04:49:00 +0000 (0:00:01.511) 0:08:44.926 ***** 2026-02-05 04:49:12.250772 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-05 04:49:12.250782 | orchestrator | 2026-02-05 04:49:12.250791 | orchestrator | TASK 
[ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-05 04:49:12.250801 | orchestrator | Thursday 05 February 2026 04:49:03 +0000 (0:00:03.405) 0:08:48.331 ***** 2026-02-05 04:49:12.250810 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:49:12.250820 | orchestrator | 2026-02-05 04:49:12.250829 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-05 04:49:12.250839 | orchestrator | Thursday 05 February 2026 04:49:04 +0000 (0:00:01.130) 0:08:49.462 ***** 2026-02-05 04:49:12.250848 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:49:12.250858 | orchestrator | 2026-02-05 04:49:12.250867 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-05 04:49:12.250877 | orchestrator | Thursday 05 February 2026 04:49:05 +0000 (0:00:01.147) 0:08:50.609 ***** 2026-02-05 04:49:12.250886 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:49:12.250896 | orchestrator | 2026-02-05 04:49:12.250905 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-05 04:49:12.250915 | orchestrator | Thursday 05 February 2026 04:49:06 +0000 (0:00:01.132) 0:08:51.741 ***** 2026-02-05 04:49:12.250925 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:49:12.250934 | orchestrator | 2026-02-05 04:49:12.250944 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-05 04:49:12.250953 | orchestrator | Thursday 05 February 2026 04:49:09 +0000 (0:00:02.154) 0:08:53.896 ***** 2026-02-05 04:49:12.250963 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:49:12.250977 | orchestrator | 2026-02-05 04:49:12.250994 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-05 04:49:12.251010 | orchestrator | Thursday 05 February 2026 04:49:10 +0000 (0:00:01.639) 0:08:55.536 ***** 2026-02-05 04:49:12.251026 | 
orchestrator | ok: [testbed-node-0] 2026-02-05 04:49:12.251043 | orchestrator | 2026-02-05 04:49:12.251068 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-05 04:50:10.820280 | orchestrator | Thursday 05 February 2026 04:49:12 +0000 (0:00:01.519) 0:08:57.056 ***** 2026-02-05 04:50:10.820425 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820444 | orchestrator | 2026-02-05 04:50:10.820457 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-05 04:50:10.820467 | orchestrator | Thursday 05 February 2026 04:49:13 +0000 (0:00:01.521) 0:08:58.578 ***** 2026-02-05 04:50:10.820477 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820487 | orchestrator | 2026-02-05 04:50:10.820496 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-05 04:50:10.820506 | orchestrator | Thursday 05 February 2026 04:49:15 +0000 (0:00:01.720) 0:09:00.298 ***** 2026-02-05 04:50:10.820516 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820526 | orchestrator | 2026-02-05 04:50:10.820535 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-05 04:50:10.820545 | orchestrator | Thursday 05 February 2026 04:49:17 +0000 (0:00:01.674) 0:09:01.973 ***** 2026-02-05 04:50:10.820555 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-05 04:50:10.820566 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 04:50:10.820576 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 04:50:10.820585 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-05 04:50:10.820595 | orchestrator | 2026-02-05 04:50:10.820605 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-05 04:50:10.820615 | orchestrator | Thursday 05 February 2026 
04:49:21 +0000 (0:00:04.009) 0:09:05.982 ***** 2026-02-05 04:50:10.820624 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:50:10.820634 | orchestrator | 2026-02-05 04:50:10.820667 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-05 04:50:10.820678 | orchestrator | Thursday 05 February 2026 04:49:23 +0000 (0:00:02.077) 0:09:08.060 ***** 2026-02-05 04:50:10.820687 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820697 | orchestrator | 2026-02-05 04:50:10.820706 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-05 04:50:10.820716 | orchestrator | Thursday 05 February 2026 04:49:24 +0000 (0:00:01.111) 0:09:09.172 ***** 2026-02-05 04:50:10.820726 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820735 | orchestrator | 2026-02-05 04:50:10.820745 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-05 04:50:10.820755 | orchestrator | Thursday 05 February 2026 04:49:25 +0000 (0:00:01.162) 0:09:10.335 ***** 2026-02-05 04:50:10.820764 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820774 | orchestrator | 2026-02-05 04:50:10.820783 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-05 04:50:10.820793 | orchestrator | Thursday 05 February 2026 04:49:27 +0000 (0:00:02.043) 0:09:12.378 ***** 2026-02-05 04:50:10.820803 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.820812 | orchestrator | 2026-02-05 04:50:10.820822 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-05 04:50:10.820832 | orchestrator | Thursday 05 February 2026 04:49:29 +0000 (0:00:01.505) 0:09:13.884 ***** 2026-02-05 04:50:10.820842 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:50:10.820851 | orchestrator | 2026-02-05 04:50:10.820861 | orchestrator | TASK [ceph-mon : Include 
start_monitor.yml] ************************************ 2026-02-05 04:50:10.820871 | orchestrator | Thursday 05 February 2026 04:49:30 +0000 (0:00:01.144) 0:09:15.028 ***** 2026-02-05 04:50:10.820895 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-05 04:50:10.820906 | orchestrator | 2026-02-05 04:50:10.820916 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-05 04:50:10.820926 | orchestrator | Thursday 05 February 2026 04:49:31 +0000 (0:00:01.480) 0:09:16.508 ***** 2026-02-05 04:50:10.820935 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:50:10.820945 | orchestrator | 2026-02-05 04:50:10.820955 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-05 04:50:10.820965 | orchestrator | Thursday 05 February 2026 04:49:32 +0000 (0:00:01.097) 0:09:17.606 ***** 2026-02-05 04:50:10.820975 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:50:10.820984 | orchestrator | 2026-02-05 04:50:10.820994 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-05 04:50:10.821003 | orchestrator | Thursday 05 February 2026 04:49:33 +0000 (0:00:01.146) 0:09:18.752 ***** 2026-02-05 04:50:10.821013 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-05 04:50:10.821022 | orchestrator | 2026-02-05 04:50:10.821032 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-05 04:50:10.821042 | orchestrator | Thursday 05 February 2026 04:49:35 +0000 (0:00:01.457) 0:09:20.210 ***** 2026-02-05 04:50:10.821051 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.821061 | orchestrator | 2026-02-05 04:50:10.821070 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-05 04:50:10.821080 | orchestrator | Thursday 05 February 2026 
04:49:37 +0000 (0:00:02.334) 0:09:22.545 ***** 2026-02-05 04:50:10.821089 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.821175 | orchestrator | 2026-02-05 04:50:10.821187 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-05 04:50:10.821196 | orchestrator | Thursday 05 February 2026 04:49:39 +0000 (0:00:02.017) 0:09:24.562 ***** 2026-02-05 04:50:10.821206 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.821216 | orchestrator | 2026-02-05 04:50:10.821225 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-05 04:50:10.821235 | orchestrator | Thursday 05 February 2026 04:49:42 +0000 (0:00:02.571) 0:09:27.134 ***** 2026-02-05 04:50:10.821245 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:50:10.821263 | orchestrator | 2026-02-05 04:50:10.821273 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-05 04:50:10.821283 | orchestrator | Thursday 05 February 2026 04:49:45 +0000 (0:00:03.456) 0:09:30.591 ***** 2026-02-05 04:50:10.821293 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-05 04:50:10.821303 | orchestrator | 2026-02-05 04:50:10.821329 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-05 04:50:10.821339 | orchestrator | Thursday 05 February 2026 04:49:47 +0000 (0:00:01.505) 0:09:32.096 ***** 2026-02-05 04:50:10.821349 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.821359 | orchestrator | 2026-02-05 04:50:10.821369 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-05 04:50:10.821379 | orchestrator | Thursday 05 February 2026 04:49:49 +0000 (0:00:02.328) 0:09:34.425 ***** 2026-02-05 04:50:10.821388 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:10.821398 | orchestrator | 2026-02-05 04:50:10.821408 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-05 04:50:10.821417 | orchestrator | Thursday 05 February 2026 04:49:52 +0000 (0:00:03.147) 0:09:37.573 ***** 2026-02-05 04:50:10.821427 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:50:10.821443 | orchestrator | 2026-02-05 04:50:10.821460 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-05 04:50:10.821469 | orchestrator | Thursday 05 February 2026 04:49:53 +0000 (0:00:01.138) 0:09:38.711 ***** 2026-02-05 04:50:10.821481 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-05 04:50:10.821495 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-05 04:50:10.821504 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-05 04:50:10.821515 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-05 04:50:10.821532 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-05 04:50:10.821543 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}])  2026-02-05 04:50:10.821562 | orchestrator | 2026-02-05 04:50:10.821577 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-05 04:50:10.821590 | orchestrator | Thursday 05 February 2026 04:50:04 +0000 (0:00:10.755) 0:09:49.467 ***** 
2026-02-05 04:50:10.821600 | orchestrator | changed: [testbed-node-0] 2026-02-05 04:50:10.821609 | orchestrator | 2026-02-05 04:50:10.821619 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:50:10.821629 | orchestrator | Thursday 05 February 2026 04:50:07 +0000 (0:00:02.635) 0:09:52.103 ***** 2026-02-05 04:50:10.821639 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 04:50:10.821649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 04:50:10.821658 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 04:50:10.821668 | orchestrator | 2026-02-05 04:50:10.821678 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:50:10.821688 | orchestrator | Thursday 05 February 2026 04:50:09 +0000 (0:00:02.137) 0:09:54.241 ***** 2026-02-05 04:50:10.821697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 04:50:10.821707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 04:50:10.821717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 04:50:10.821726 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:50:10.821736 | orchestrator | 2026-02-05 04:50:10.821746 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-05 04:50:10.821761 | orchestrator | Thursday 05 February 2026 04:50:10 +0000 (0:00:01.377) 0:09:55.618 ***** 2026-02-05 04:50:39.862372 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:50:39.862477 | orchestrator | 2026-02-05 04:50:39.862497 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-05 04:50:39.862511 | orchestrator | Thursday 05 February 2026 04:50:11 +0000 (0:00:01.100) 0:09:56.719 ***** 2026-02-05 04:50:39.862525 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:50:39.862538 | orchestrator | 2026-02-05 04:50:39.862551 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-05 04:50:39.862564 | orchestrator | 2026-02-05 04:50:39.862577 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-05 04:50:39.862588 | orchestrator | Thursday 05 February 2026 04:50:14 +0000 (0:00:02.159) 0:09:58.878 ***** 2026-02-05 04:50:39.862601 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.862614 | orchestrator | 2026-02-05 04:50:39.862626 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-05 04:50:39.862639 | orchestrator | Thursday 05 February 2026 04:50:15 +0000 (0:00:01.128) 0:10:00.007 ***** 2026-02-05 04:50:39.862650 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.862663 | orchestrator | 2026-02-05 04:50:39.862676 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-05 04:50:39.862689 | orchestrator | Thursday 05 February 2026 04:50:15 +0000 (0:00:00.780) 0:10:00.788 ***** 2026-02-05 04:50:39.862701 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:39.862713 | orchestrator | 2026-02-05 04:50:39.862725 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-05 04:50:39.862738 | orchestrator | Thursday 05 February 2026 04:50:16 +0000 (0:00:00.786) 0:10:01.574 ***** 2026-02-05 04:50:39.862750 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.862762 | orchestrator | 2026-02-05 04:50:39.862775 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 04:50:39.862787 | orchestrator | Thursday 05 February 
2026 04:50:17 +0000 (0:00:00.776) 0:10:02.350 ***** 2026-02-05 04:50:39.862800 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-05 04:50:39.862812 | orchestrator | 2026-02-05 04:50:39.862824 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 04:50:39.862836 | orchestrator | Thursday 05 February 2026 04:50:18 +0000 (0:00:01.099) 0:10:03.450 ***** 2026-02-05 04:50:39.862849 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.862888 | orchestrator | 2026-02-05 04:50:39.862902 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 04:50:39.862915 | orchestrator | Thursday 05 February 2026 04:50:20 +0000 (0:00:01.488) 0:10:04.939 ***** 2026-02-05 04:50:39.862927 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.862938 | orchestrator | 2026-02-05 04:50:39.862951 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 04:50:39.862965 | orchestrator | Thursday 05 February 2026 04:50:21 +0000 (0:00:01.120) 0:10:06.059 ***** 2026-02-05 04:50:39.862977 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.862990 | orchestrator | 2026-02-05 04:50:39.863004 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 04:50:39.863016 | orchestrator | Thursday 05 February 2026 04:50:22 +0000 (0:00:01.467) 0:10:07.526 ***** 2026-02-05 04:50:39.863028 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.863041 | orchestrator | 2026-02-05 04:50:39.863055 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 04:50:39.863084 | orchestrator | Thursday 05 February 2026 04:50:23 +0000 (0:00:01.115) 0:10:08.642 ***** 2026-02-05 04:50:39.863127 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.863139 | orchestrator | 2026-02-05 04:50:39.863148 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 04:50:39.863157 | orchestrator | Thursday 05 February 2026 04:50:24 +0000 (0:00:01.102) 0:10:09.744 ***** 2026-02-05 04:50:39.863165 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.863174 | orchestrator | 2026-02-05 04:50:39.863182 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 04:50:39.863191 | orchestrator | Thursday 05 February 2026 04:50:26 +0000 (0:00:01.162) 0:10:10.907 ***** 2026-02-05 04:50:39.863199 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:39.863208 | orchestrator | 2026-02-05 04:50:39.863216 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 04:50:39.863225 | orchestrator | Thursday 05 February 2026 04:50:27 +0000 (0:00:01.139) 0:10:12.046 ***** 2026-02-05 04:50:39.863233 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.863241 | orchestrator | 2026-02-05 04:50:39.863250 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 04:50:39.863257 | orchestrator | Thursday 05 February 2026 04:50:28 +0000 (0:00:01.134) 0:10:13.181 ***** 2026-02-05 04:50:39.863265 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:50:39.863272 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:50:39.863280 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:50:39.863287 | orchestrator | 2026-02-05 04:50:39.863294 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 04:50:39.863301 | orchestrator | Thursday 05 February 2026 04:50:30 +0000 (0:00:02.032) 0:10:15.214 ***** 2026-02-05 04:50:39.863308 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:39.863315 | 
orchestrator | 2026-02-05 04:50:39.863322 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 04:50:39.863330 | orchestrator | Thursday 05 February 2026 04:50:31 +0000 (0:00:01.217) 0:10:16.432 ***** 2026-02-05 04:50:39.863337 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:50:39.863344 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:50:39.863355 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:50:39.863367 | orchestrator | 2026-02-05 04:50:39.863380 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 04:50:39.863388 | orchestrator | Thursday 05 February 2026 04:50:34 +0000 (0:00:02.886) 0:10:19.318 ***** 2026-02-05 04:50:39.863412 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 04:50:39.863420 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 04:50:39.863436 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 04:50:39.863444 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:39.863451 | orchestrator | 2026-02-05 04:50:39.863458 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 04:50:39.863465 | orchestrator | Thursday 05 February 2026 04:50:35 +0000 (0:00:01.435) 0:10:20.753 ***** 2026-02-05 04:50:39.863473 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 04:50:39.863484 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 04:50:39.863491 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 04:50:39.863498 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:39.863506 | orchestrator | 2026-02-05 04:50:39.863513 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 04:50:39.863520 | orchestrator | Thursday 05 February 2026 04:50:37 +0000 (0:00:01.597) 0:10:22.350 ***** 2026-02-05 04:50:39.863529 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:39.863540 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:39.863552 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:39.863560 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:39.863567 | orchestrator | 2026-02-05 04:50:39.863574 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 04:50:39.863582 | orchestrator | Thursday 05 February 2026 04:50:38 +0000 (0:00:01.144) 0:10:23.494 ***** 2026-02-05 04:50:39.863590 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 04:50:32.169534', 'end': '2026-02-05 04:50:32.227741', 'delta': '0:00:00.058207', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 04:50:39.863606 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'df4012ab4a61', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 04:50:32.730623', 'end': '2026-02-05 04:50:32.776020', 'delta': '0:00:00.045397', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['df4012ab4a61'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 04:50:58.179709 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '458f6feaf079', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 04:50:33.296901', 'end': '2026-02-05 04:50:33.346408', 'delta': '0:00:00.049507', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['458f6feaf079'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 04:50:58.179814 | orchestrator | 2026-02-05 04:50:58.179829 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 04:50:58.179839 | orchestrator | Thursday 05 February 2026 04:50:39 +0000 (0:00:01.176) 0:10:24.671 ***** 2026-02-05 04:50:58.179847 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:58.179856 | orchestrator | 2026-02-05 04:50:58.179864 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 04:50:58.179872 | orchestrator | Thursday 05 February 2026 04:50:41 +0000 (0:00:01.255) 0:10:25.927 ***** 2026-02-05 04:50:58.179880 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.179888 | orchestrator | 2026-02-05 04:50:58.179896 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 04:50:58.179904 | orchestrator | Thursday 05 February 2026 04:50:42 +0000 (0:00:01.213) 0:10:27.140 ***** 2026-02-05 04:50:58.179912 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:58.179920 | orchestrator | 2026-02-05 04:50:58.179927 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-05 04:50:58.179935 | orchestrator | Thursday 05 February 2026 04:50:43 +0000 (0:00:01.132) 0:10:28.273 ***** 2026-02-05 04:50:58.179943 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-05 04:50:58.179956 | orchestrator | 2026-02-05 04:50:58.179970 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 04:50:58.179983 | orchestrator | Thursday 05 February 2026 04:50:45 +0000 (0:00:01.971) 0:10:30.245 ***** 2026-02-05 04:50:58.179996 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:50:58.180009 | orchestrator | 2026-02-05 04:50:58.180022 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 04:50:58.180033 | orchestrator | Thursday 05 February 2026 04:50:46 +0000 (0:00:01.124) 0:10:31.369 ***** 2026-02-05 04:50:58.180048 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180062 | orchestrator | 2026-02-05 04:50:58.180075 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 04:50:58.180135 | orchestrator | Thursday 05 February 2026 04:50:47 +0000 (0:00:01.102) 0:10:32.471 ***** 2026-02-05 04:50:58.180152 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180163 | orchestrator | 2026-02-05 04:50:58.180171 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 04:50:58.180179 | orchestrator | Thursday 05 February 2026 04:50:49 +0000 (0:00:01.489) 0:10:33.961 ***** 2026-02-05 04:50:58.180188 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180196 | orchestrator | 2026-02-05 04:50:58.180204 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 04:50:58.180232 | orchestrator | Thursday 05 February 2026 04:50:50 +0000 (0:00:01.111) 0:10:35.073 ***** 
2026-02-05 04:50:58.180240 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180249 | orchestrator | 2026-02-05 04:50:58.180258 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 04:50:58.180273 | orchestrator | Thursday 05 February 2026 04:50:51 +0000 (0:00:01.106) 0:10:36.179 ***** 2026-02-05 04:50:58.180286 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180299 | orchestrator | 2026-02-05 04:50:58.180312 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 04:50:58.180324 | orchestrator | Thursday 05 February 2026 04:50:52 +0000 (0:00:01.098) 0:10:37.278 ***** 2026-02-05 04:50:58.180337 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180350 | orchestrator | 2026-02-05 04:50:58.180362 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 04:50:58.180376 | orchestrator | Thursday 05 February 2026 04:50:53 +0000 (0:00:01.112) 0:10:38.390 ***** 2026-02-05 04:50:58.180390 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180405 | orchestrator | 2026-02-05 04:50:58.180418 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 04:50:58.180432 | orchestrator | Thursday 05 February 2026 04:50:54 +0000 (0:00:01.099) 0:10:39.489 ***** 2026-02-05 04:50:58.180445 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180454 | orchestrator | 2026-02-05 04:50:58.180463 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 04:50:58.180472 | orchestrator | Thursday 05 February 2026 04:50:55 +0000 (0:00:01.159) 0:10:40.649 ***** 2026-02-05 04:50:58.180481 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:58.180490 | orchestrator | 2026-02-05 04:50:58.180499 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-05 04:50:58.180508 | orchestrator | Thursday 05 February 2026 04:50:56 +0000 (0:00:01.114) 0:10:41.764 ***** 2026-02-05 04:50:58.180537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:58.180550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:58.180559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:58.180571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 04:50:58.180582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:58.180606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:58.180615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:58.180634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91e0d2c4', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 04:50:59.381892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:59.382064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:50:59.382197 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:50:59.382215 | orchestrator | 2026-02-05 04:50:59.382228 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 04:50:59.382240 | orchestrator | Thursday 05 February 2026 04:50:58 +0000 (0:00:01.220) 0:10:42.984 ***** 2026-02-05 04:50:59.382270 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382285 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382296 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382309 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382345 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382359 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382384 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382399 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91e0d2c4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:50:59.382422 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:51:32.024479 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:51:32.024586 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.024602 | orchestrator | 2026-02-05 04:51:32.024615 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 04:51:32.024626 | 
orchestrator | Thursday 05 February 2026 04:50:59 +0000 (0:00:01.212) 0:10:44.197 ***** 2026-02-05 04:51:32.024636 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:51:32.024647 | orchestrator | 2026-02-05 04:51:32.024657 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 04:51:32.024682 | orchestrator | Thursday 05 February 2026 04:51:00 +0000 (0:00:01.550) 0:10:45.748 ***** 2026-02-05 04:51:32.024692 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:51:32.024702 | orchestrator | 2026-02-05 04:51:32.024713 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:51:32.024722 | orchestrator | Thursday 05 February 2026 04:51:02 +0000 (0:00:01.106) 0:10:46.855 ***** 2026-02-05 04:51:32.024732 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:51:32.024743 | orchestrator | 2026-02-05 04:51:32.024752 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:51:32.024762 | orchestrator | Thursday 05 February 2026 04:51:03 +0000 (0:00:01.475) 0:10:48.330 ***** 2026-02-05 04:51:32.024772 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.024783 | orchestrator | 2026-02-05 04:51:32.024793 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:51:32.024803 | orchestrator | Thursday 05 February 2026 04:51:04 +0000 (0:00:01.107) 0:10:49.437 ***** 2026-02-05 04:51:32.024813 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.024822 | orchestrator | 2026-02-05 04:51:32.024832 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:51:32.024842 | orchestrator | Thursday 05 February 2026 04:51:05 +0000 (0:00:01.263) 0:10:50.701 ***** 2026-02-05 04:51:32.024854 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.024870 | orchestrator | 2026-02-05 04:51:32.024886 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:51:32.024903 | orchestrator | Thursday 05 February 2026 04:51:06 +0000 (0:00:01.119) 0:10:51.820 ***** 2026-02-05 04:51:32.024919 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-05 04:51:32.024930 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:51:32.024941 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-05 04:51:32.024950 | orchestrator | 2026-02-05 04:51:32.024960 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:51:32.024970 | orchestrator | Thursday 05 February 2026 04:51:08 +0000 (0:00:01.644) 0:10:53.465 ***** 2026-02-05 04:51:32.024979 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 04:51:32.024990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 04:51:32.025002 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 04:51:32.025014 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025025 | orchestrator | 2026-02-05 04:51:32.025037 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 04:51:32.025049 | orchestrator | Thursday 05 February 2026 04:51:09 +0000 (0:00:01.190) 0:10:54.656 ***** 2026-02-05 04:51:32.025060 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025093 | orchestrator | 2026-02-05 04:51:32.025138 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 04:51:32.025149 | orchestrator | Thursday 05 February 2026 04:51:10 +0000 (0:00:01.117) 0:10:55.773 ***** 2026-02-05 04:51:32.025162 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:51:32.025173 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 
04:51:32.025185 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:51:32.025196 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:51:32.025208 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:51:32.025219 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:51:32.025231 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:51:32.025243 | orchestrator | 2026-02-05 04:51:32.025254 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 04:51:32.025266 | orchestrator | Thursday 05 February 2026 04:51:12 +0000 (0:00:01.746) 0:10:57.520 ***** 2026-02-05 04:51:32.025277 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:51:32.025289 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:51:32.025300 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 04:51:32.025312 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:51:32.025339 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:51:32.025351 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:51:32.025363 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:51:32.025378 | orchestrator | 2026-02-05 04:51:32.025394 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-05 04:51:32.025410 | orchestrator | Thursday 05 February 2026 04:51:14 +0000 (0:00:01.838) 0:10:59.359 
***** 2026-02-05 04:51:32.025424 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025437 | orchestrator | 2026-02-05 04:51:32.025462 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-05 04:51:32.025479 | orchestrator | Thursday 05 February 2026 04:51:15 +0000 (0:00:00.698) 0:11:00.057 ***** 2026-02-05 04:51:32.025495 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025510 | orchestrator | 2026-02-05 04:51:32.025525 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-05 04:51:32.025542 | orchestrator | Thursday 05 February 2026 04:51:16 +0000 (0:00:00.827) 0:11:00.885 ***** 2026-02-05 04:51:32.025558 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025575 | orchestrator | 2026-02-05 04:51:32.025585 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-05 04:51:32.025602 | orchestrator | Thursday 05 February 2026 04:51:16 +0000 (0:00:00.750) 0:11:01.636 ***** 2026-02-05 04:51:32.025612 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025622 | orchestrator | 2026-02-05 04:51:32.025631 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-05 04:51:32.025641 | orchestrator | Thursday 05 February 2026 04:51:17 +0000 (0:00:00.823) 0:11:02.459 ***** 2026-02-05 04:51:32.025651 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025661 | orchestrator | 2026-02-05 04:51:32.025671 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-05 04:51:32.025680 | orchestrator | Thursday 05 February 2026 04:51:18 +0000 (0:00:00.779) 0:11:03.239 ***** 2026-02-05 04:51:32.025690 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 04:51:32.025699 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 
04:51:32.025719 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 04:51:32.025729 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025739 | orchestrator | 2026-02-05 04:51:32.025748 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-05 04:51:32.025758 | orchestrator | Thursday 05 February 2026 04:51:19 +0000 (0:00:00.987) 0:11:04.227 ***** 2026-02-05 04:51:32.025768 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-05 04:51:32.025778 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-05 04:51:32.025788 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-05 04:51:32.025797 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-05 04:51:32.025807 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-05 04:51:32.025822 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-05 04:51:32.025838 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:51:32.025855 | orchestrator | 2026-02-05 04:51:32.025870 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-05 04:51:32.025886 | orchestrator | Thursday 05 February 2026 04:51:20 +0000 (0:00:01.283) 0:11:05.510 ***** 2026-02-05 04:51:32.025902 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:51:32.025918 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 04:51:32.025935 | orchestrator | 2026-02-05 04:51:32.025951 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-05 04:51:32.025968 | orchestrator | Thursday 05 February 2026 04:51:23 +0000 (0:00:03.124) 0:11:08.635 ***** 
2026-02-05 04:51:32.025984 | orchestrator | changed: [testbed-node-1]
2026-02-05 04:51:32.026001 | orchestrator | 
2026-02-05 04:51:32.026094 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 04:51:32.026185 | orchestrator | Thursday 05 February 2026 04:51:26 +0000 (0:00:02.203) 0:11:10.838 *****
2026-02-05 04:51:32.026203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-05 04:51:32.026221 | orchestrator | 
2026-02-05 04:51:32.026236 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 04:51:32.026251 | orchestrator | Thursday 05 February 2026 04:51:27 +0000 (0:00:01.088) 0:11:11.926 *****
2026-02-05 04:51:32.026266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-05 04:51:32.026283 | orchestrator | 
2026-02-05 04:51:32.026299 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 04:51:32.026315 | orchestrator | Thursday 05 February 2026 04:51:28 +0000 (0:00:01.103) 0:11:13.030 *****
2026-02-05 04:51:32.026331 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:51:32.026348 | orchestrator | 
2026-02-05 04:51:32.026363 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 04:51:32.026378 | orchestrator | Thursday 05 February 2026 04:51:29 +0000 (0:00:01.536) 0:11:14.567 *****
2026-02-05 04:51:32.026394 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:51:32.026409 | orchestrator | 
2026-02-05 04:51:32.026425 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 04:51:32.026442 | orchestrator | Thursday 05 February 2026 04:51:30 +0000 (0:00:01.156) 0:11:15.723 *****
2026-02-05 04:51:32.026457 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:51:32.026474 | orchestrator | 
2026-02-05 04:51:32.026490 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 04:51:32.026524 | orchestrator | Thursday 05 February 2026 04:51:32 +0000 (0:00:01.110) 0:11:16.833 *****
2026-02-05 04:52:13.291995 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292164 | orchestrator | 
2026-02-05 04:52:13.292186 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 04:52:13.292223 | orchestrator | Thursday 05 February 2026 04:51:33 +0000 (0:00:01.118) 0:11:17.952 *****
2026-02-05 04:52:13.292236 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292248 | orchestrator | 
2026-02-05 04:52:13.292259 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 04:52:13.292270 | orchestrator | Thursday 05 February 2026 04:51:34 +0000 (0:00:01.543) 0:11:19.496 *****
2026-02-05 04:52:13.292281 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292292 | orchestrator | 
2026-02-05 04:52:13.292303 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 04:52:13.292314 | orchestrator | Thursday 05 February 2026 04:51:35 +0000 (0:00:01.141) 0:11:20.638 *****
2026-02-05 04:52:13.292325 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292336 | orchestrator | 
2026-02-05 04:52:13.292347 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 04:52:13.292358 | orchestrator | Thursday 05 February 2026 04:51:36 +0000 (0:00:01.104) 0:11:21.742 *****
2026-02-05 04:52:13.292369 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292380 | orchestrator | 
2026-02-05 04:52:13.292391 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 04:52:13.292416 | orchestrator | Thursday 05 February 2026 04:51:38 +0000 (0:00:01.621) 0:11:23.364 *****
2026-02-05 04:52:13.292427 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292438 | orchestrator | 
2026-02-05 04:52:13.292449 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 04:52:13.292460 | orchestrator | Thursday 05 February 2026 04:51:40 +0000 (0:00:01.580) 0:11:24.945 *****
2026-02-05 04:52:13.292471 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292481 | orchestrator | 
2026-02-05 04:52:13.292492 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 04:52:13.292505 | orchestrator | Thursday 05 February 2026 04:51:40 +0000 (0:00:00.784) 0:11:25.729 *****
2026-02-05 04:52:13.292517 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292530 | orchestrator | 
2026-02-05 04:52:13.292543 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 04:52:13.292555 | orchestrator | Thursday 05 February 2026 04:51:41 +0000 (0:00:00.776) 0:11:26.506 *****
2026-02-05 04:52:13.292568 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292581 | orchestrator | 
2026-02-05 04:52:13.292593 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 04:52:13.292607 | orchestrator | Thursday 05 February 2026 04:51:42 +0000 (0:00:00.768) 0:11:27.274 *****
2026-02-05 04:52:13.292620 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292632 | orchestrator | 
2026-02-05 04:52:13.292645 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 04:52:13.292657 | orchestrator | Thursday 05 February 2026 04:51:43 +0000 (0:00:00.799) 0:11:28.074 *****
2026-02-05 04:52:13.292671 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292683 | orchestrator | 
2026-02-05 04:52:13.292696 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 04:52:13.292709 | orchestrator | Thursday 05 February 2026 04:51:44 +0000 (0:00:00.785) 0:11:28.859 *****
2026-02-05 04:52:13.292721 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292733 | orchestrator | 
2026-02-05 04:52:13.292746 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 04:52:13.292759 | orchestrator | Thursday 05 February 2026 04:51:44 +0000 (0:00:00.760) 0:11:29.619 *****
2026-02-05 04:52:13.292771 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292783 | orchestrator | 
2026-02-05 04:52:13.292796 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 04:52:13.292808 | orchestrator | Thursday 05 February 2026 04:51:45 +0000 (0:00:00.747) 0:11:30.367 *****
2026-02-05 04:52:13.292821 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292834 | orchestrator | 
2026-02-05 04:52:13.292847 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 04:52:13.292866 | orchestrator | Thursday 05 February 2026 04:51:46 +0000 (0:00:00.779) 0:11:31.146 *****
2026-02-05 04:52:13.292877 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292888 | orchestrator | 
2026-02-05 04:52:13.292899 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 04:52:13.292910 | orchestrator | Thursday 05 February 2026 04:51:47 +0000 (0:00:00.785) 0:11:31.931 *****
2026-02-05 04:52:13.292920 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.292931 | orchestrator | 
2026-02-05 04:52:13.292942 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 04:52:13.292953 | orchestrator | Thursday 05 February 2026 04:51:47 +0000 (0:00:00.866) 0:11:32.798 *****
2026-02-05 04:52:13.292964 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.292975 | orchestrator | 
2026-02-05 04:52:13.292986 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 04:52:13.292997 | orchestrator | Thursday 05 February 2026 04:51:48 +0000 (0:00:00.817) 0:11:33.616 *****
2026-02-05 04:52:13.293008 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293019 | orchestrator | 
2026-02-05 04:52:13.293030 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 04:52:13.293041 | orchestrator | Thursday 05 February 2026 04:51:49 +0000 (0:00:00.768) 0:11:34.385 *****
2026-02-05 04:52:13.293052 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293062 | orchestrator | 
2026-02-05 04:52:13.293073 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 04:52:13.293084 | orchestrator | Thursday 05 February 2026 04:51:50 +0000 (0:00:00.757) 0:11:35.142 *****
2026-02-05 04:52:13.293095 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293169 | orchestrator | 
2026-02-05 04:52:13.293183 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 04:52:13.293194 | orchestrator | Thursday 05 February 2026 04:51:51 +0000 (0:00:00.758) 0:11:35.901 *****
2026-02-05 04:52:13.293205 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293216 | orchestrator | 
2026-02-05 04:52:13.293246 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 04:52:13.293258 | orchestrator | Thursday 05 February 2026 04:51:51 +0000 (0:00:00.750) 0:11:36.652 *****
2026-02-05 04:52:13.293269 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293279 | orchestrator | 
2026-02-05 04:52:13.293290 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 04:52:13.293301 | orchestrator | Thursday 05 February 2026 04:51:52 +0000 (0:00:00.756) 0:11:37.408 *****
2026-02-05 04:52:13.293312 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293323 | orchestrator | 
2026-02-05 04:52:13.293334 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-05 04:52:13.293345 | orchestrator | Thursday 05 February 2026 04:51:53 +0000 (0:00:00.779) 0:11:38.188 *****
2026-02-05 04:52:13.293356 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293367 | orchestrator | 
2026-02-05 04:52:13.293378 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-05 04:52:13.293389 | orchestrator | Thursday 05 February 2026 04:51:54 +0000 (0:00:00.753) 0:11:38.941 *****
2026-02-05 04:52:13.293400 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293410 | orchestrator | 
2026-02-05 04:52:13.293421 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-05 04:52:13.293435 | orchestrator | Thursday 05 February 2026 04:51:54 +0000 (0:00:00.764) 0:11:39.706 *****
2026-02-05 04:52:13.293452 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293468 | orchestrator | 
2026-02-05 04:52:13.293491 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-05 04:52:13.293507 | orchestrator | Thursday 05 February 2026 04:51:55 +0000 (0:00:00.796) 0:11:40.502 *****
2026-02-05 04:52:13.293522 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293537 | orchestrator | 
2026-02-05 04:52:13.293554 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-05 04:52:13.293580 | orchestrator | Thursday 05 February 2026 04:51:56 +0000 (0:00:00.761) 0:11:41.264 *****
2026-02-05 04:52:13.293595 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293611 | orchestrator | 
2026-02-05 04:52:13.293627 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 04:52:13.293641 | orchestrator | Thursday 05 February 2026 04:51:57 +0000 (0:00:00.839) 0:11:42.103 *****
2026-02-05 04:52:13.293655 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.293670 | orchestrator | 
2026-02-05 04:52:13.293685 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 04:52:13.293700 | orchestrator | Thursday 05 February 2026 04:51:58 +0000 (0:00:01.638) 0:11:43.742 *****
2026-02-05 04:52:13.293715 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.293731 | orchestrator | 
2026-02-05 04:52:13.293747 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 04:52:13.293763 | orchestrator | Thursday 05 February 2026 04:52:01 +0000 (0:00:02.096) 0:11:45.838 *****
2026-02-05 04:52:13.293778 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-05 04:52:13.293795 | orchestrator | 
2026-02-05 04:52:13.293812 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 04:52:13.293828 | orchestrator | Thursday 05 February 2026 04:52:02 +0000 (0:00:01.091) 0:11:46.930 *****
2026-02-05 04:52:13.293844 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293860 | orchestrator | 
2026-02-05 04:52:13.293876 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 04:52:13.293893 | orchestrator | Thursday 05 February 2026 04:52:03 +0000 (0:00:01.114) 0:11:48.044 *****
2026-02-05 04:52:13.293909 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.293925 | orchestrator | 
2026-02-05 04:52:13.293942 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 04:52:13.293958 | orchestrator | Thursday 05 February 2026 04:52:04 +0000 (0:00:01.146) 0:11:49.190 *****
2026-02-05 04:52:13.293974 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 04:52:13.293990 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 04:52:13.294005 | orchestrator | 
2026-02-05 04:52:13.294089 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 04:52:13.294129 | orchestrator | Thursday 05 February 2026 04:52:06 +0000 (0:00:01.887) 0:11:51.078 *****
2026-02-05 04:52:13.294147 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.294163 | orchestrator | 
2026-02-05 04:52:13.294180 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 04:52:13.294196 | orchestrator | Thursday 05 February 2026 04:52:07 +0000 (0:00:01.433) 0:11:52.512 *****
2026-02-05 04:52:13.294212 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.294227 | orchestrator | 
2026-02-05 04:52:13.294244 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 04:52:13.294260 | orchestrator | Thursday 05 February 2026 04:52:08 +0000 (0:00:01.127) 0:11:53.640 *****
2026-02-05 04:52:13.294276 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.294292 | orchestrator | 
2026-02-05 04:52:13.294308 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 04:52:13.294324 | orchestrator | Thursday 05 February 2026 04:52:09 +0000 (0:00:00.767) 0:11:54.407 *****
2026-02-05 04:52:13.294341 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:13.294357 | orchestrator | 
2026-02-05 04:52:13.294373 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 04:52:13.294388 | orchestrator | Thursday 05 February 2026 04:52:10 +0000 (0:00:00.766) 0:11:55.174 *****
2026-02-05 04:52:13.294403 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-05 04:52:13.294418 | orchestrator | 
2026-02-05 04:52:13.294433 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 04:52:13.294459 | orchestrator | Thursday 05 February 2026 04:52:11 +0000 (0:00:01.193) 0:11:56.367 *****
2026-02-05 04:52:13.294476 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:13.294492 | orchestrator | 
2026-02-05 04:52:13.294508 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 04:52:13.294541 | orchestrator | Thursday 05 February 2026 04:52:13 +0000 (0:00:01.734) 0:11:58.102 *****
2026-02-05 04:52:52.551581 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-02-05 04:52:52.551699 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-02-05 04:52:52.551717 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
2026-02-05 04:52:52.551729 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.551743 | orchestrator | 
2026-02-05 04:52:52.551756 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 04:52:52.551768 | orchestrator | Thursday 05 February 2026 04:52:14 +0000 (0:00:01.141) 0:11:59.243 *****
2026-02-05 04:52:52.551779 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.551790 | orchestrator | 
2026-02-05 04:52:52.551801 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 04:52:52.551812 | orchestrator | Thursday 05 February 2026 04:52:15 +0000 (0:00:01.141) 0:12:00.384 *****
2026-02-05 04:52:52.551823 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.551834 | orchestrator | 
2026-02-05 04:52:52.551845 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 04:52:52.551857 | orchestrator | Thursday 05 February 2026 04:52:16 +0000 (0:00:01.206) 0:12:01.591 *****
2026-02-05 04:52:52.551895 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.551937 | orchestrator | 
2026-02-05 04:52:52.551970 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 04:52:52.551989 | orchestrator | Thursday 05 February 2026 04:52:17 +0000 (0:00:01.122) 0:12:02.714 *****
2026-02-05 04:52:52.552008 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552039 | orchestrator | 
2026-02-05 04:52:52.552056 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-05 04:52:52.552073 | orchestrator | Thursday 05 February 2026 04:52:19 +0000 (0:00:01.151) 0:12:03.866 *****
2026-02-05 04:52:52.552091 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552109 | orchestrator | 
2026-02-05 04:52:52.552155 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 04:52:52.552174 | orchestrator | Thursday 05 February 2026 04:52:19 +0000 (0:00:00.780) 0:12:04.646 *****
2026-02-05 04:52:52.552192 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:52.552211 | orchestrator | 
2026-02-05 04:52:52.552229 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 04:52:52.552249 | orchestrator | Thursday 05 February 2026 04:52:22 +0000 (0:00:02.245) 0:12:06.892 *****
2026-02-05 04:52:52.552269 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:52.552289 | orchestrator | 
2026-02-05 04:52:52.552338 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 04:52:52.552367 | orchestrator | Thursday 05 February 2026 04:52:22 +0000 (0:00:00.794) 0:12:07.687 *****
2026-02-05 04:52:52.552394 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-05 04:52:52.552419 | orchestrator | 
2026-02-05 04:52:52.552432 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-05 04:52:52.552445 | orchestrator | Thursday 05 February 2026 04:52:23 +0000 (0:00:01.099) 0:12:08.787 *****
2026-02-05 04:52:52.552457 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552468 | orchestrator | 
2026-02-05 04:52:52.552559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-05 04:52:52.552571 | orchestrator | Thursday 05 February 2026 04:52:25 +0000 (0:00:01.113) 0:12:09.900 *****
2026-02-05 04:52:52.552583 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552619 | orchestrator | 
2026-02-05 04:52:52.552631 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-05 04:52:52.552642 | orchestrator | Thursday 05 February 2026 04:52:26 +0000 (0:00:01.146) 0:12:11.046 *****
2026-02-05 04:52:52.552653 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552664 | orchestrator | 
2026-02-05 04:52:52.552675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-05 04:52:52.552686 | orchestrator | Thursday 05 February 2026 04:52:27 +0000 (0:00:01.140) 0:12:12.186 *****
2026-02-05 04:52:52.552697 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552707 | orchestrator | 
2026-02-05 04:52:52.552719 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-05 04:52:52.552729 | orchestrator | Thursday 05 February 2026 04:52:28 +0000 (0:00:01.118) 0:12:13.305 *****
2026-02-05 04:52:52.552740 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552751 | orchestrator | 
2026-02-05 04:52:52.552762 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-05 04:52:52.552773 | orchestrator | Thursday 05 February 2026 04:52:29 +0000 (0:00:01.145) 0:12:14.450 *****
2026-02-05 04:52:52.552784 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552795 | orchestrator | 
2026-02-05 04:52:52.552806 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-05 04:52:52.552817 | orchestrator | Thursday 05 February 2026 04:52:30 +0000 (0:00:01.190) 0:12:15.640 *****
2026-02-05 04:52:52.552828 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552839 | orchestrator | 
2026-02-05 04:52:52.552850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-05 04:52:52.552861 | orchestrator | Thursday 05 February 2026 04:52:31 +0000 (0:00:01.126) 0:12:16.767 *****
2026-02-05 04:52:52.552871 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.552882 | orchestrator | 
2026-02-05 04:52:52.552893 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-05 04:52:52.552904 | orchestrator | Thursday 05 February 2026 04:52:33 +0000 (0:00:01.154) 0:12:17.922 *****
2026-02-05 04:52:52.552920 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:52:52.552938 | orchestrator | 
2026-02-05 04:52:52.552965 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 04:52:52.552986 | orchestrator | Thursday 05 February 2026 04:52:33 +0000 (0:00:00.797) 0:12:18.719 *****
2026-02-05 04:52:52.553005 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-05 04:52:52.553024 | orchestrator | 
2026-02-05 04:52:52.553041 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-05 04:52:52.553088 | orchestrator | Thursday 05 February 2026 04:52:35 +0000 (0:00:01.109) 0:12:19.829 *****
2026-02-05 04:52:52.553107 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-05 04:52:52.553162 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-05 04:52:52.553181 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-05 04:52:52.553200 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-05 04:52:52.553219 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-05 04:52:52.553236 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-05 04:52:52.553254 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-05 04:52:52.553271 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-05 04:52:52.553290 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 04:52:52.553308 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 04:52:52.553326 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 04:52:52.553344 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 04:52:52.553375 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 04:52:52.553388 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 04:52:52.553412 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-05 04:52:52.553427 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-05 04:52:52.553450 | orchestrator | 
2026-02-05 04:52:52.553477 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 04:52:52.553495 | orchestrator | Thursday 05 February 2026 04:52:41 +0000 (0:00:06.654) 0:12:26.484 *****
2026-02-05 04:52:52.553512 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553530 | orchestrator | 
2026-02-05 04:52:52.553546 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 04:52:52.553565 | orchestrator | Thursday 05 February 2026 04:52:42 +0000 (0:00:00.771) 0:12:27.256 *****
2026-02-05 04:52:52.553583 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553603 | orchestrator | 
2026-02-05 04:52:52.553621 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 04:52:52.553639 | orchestrator | Thursday 05 February 2026 04:52:43 +0000 (0:00:00.774) 0:12:28.030 *****
2026-02-05 04:52:52.553657 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553668 | orchestrator | 
2026-02-05 04:52:52.553679 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 04:52:52.553690 | orchestrator | Thursday 05 February 2026 04:52:43 +0000 (0:00:00.759) 0:12:28.790 *****
2026-02-05 04:52:52.553700 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553711 | orchestrator | 
2026-02-05 04:52:52.553722 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 04:52:52.553732 | orchestrator | Thursday 05 February 2026 04:52:44 +0000 (0:00:00.767) 0:12:29.558 *****
2026-02-05 04:52:52.553765 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553776 | orchestrator | 
2026-02-05 04:52:52.553787 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 04:52:52.553798 | orchestrator | Thursday 05 February 2026 04:52:45 +0000 (0:00:00.779) 0:12:30.337 *****
2026-02-05 04:52:52.553809 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553819 | orchestrator | 
2026-02-05 04:52:52.553830 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 04:52:52.553841 | orchestrator | Thursday 05 February 2026 04:52:46 +0000 (0:00:00.749) 0:12:31.086 *****
2026-02-05 04:52:52.553851 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553862 | orchestrator | 
2026-02-05 04:52:52.553873 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 04:52:52.553884 | orchestrator | Thursday 05 February 2026 04:52:47 +0000 (0:00:00.775) 0:12:31.862 *****
2026-02-05 04:52:52.553895 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553905 | orchestrator | 
2026-02-05 04:52:52.553916 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 04:52:52.553927 | orchestrator | Thursday 05 February 2026 04:52:47 +0000 (0:00:00.772) 0:12:32.634 *****
2026-02-05 04:52:52.553938 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553949 | orchestrator | 
2026-02-05 04:52:52.553959 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 04:52:52.553970 | orchestrator | Thursday 05 February 2026 04:52:48 +0000 (0:00:00.790) 0:12:33.424 *****
2026-02-05 04:52:52.553981 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.553992 | orchestrator | 
2026-02-05 04:52:52.554002 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 04:52:52.554013 | orchestrator | Thursday 05 February 2026 04:52:49 +0000 (0:00:00.782) 0:12:34.207 *****
2026-02-05 04:52:52.554093 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.554106 | orchestrator | 
2026-02-05 04:52:52.554184 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 04:52:52.554212 | orchestrator | Thursday 05 February 2026 04:52:50 +0000 (0:00:00.756) 0:12:34.964 *****
2026-02-05 04:52:52.554231 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.554263 | orchestrator | 
2026-02-05 04:52:52.554281 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 04:52:52.554299 | orchestrator | Thursday 05 February 2026 04:52:50 +0000 (0:00:00.777) 0:12:35.741 *****
2026-02-05 04:52:52.554318 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.554335 | orchestrator | 
2026-02-05 04:52:52.554352 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 04:52:52.554370 | orchestrator | Thursday 05 February 2026 04:52:51 +0000 (0:00:00.851) 0:12:36.593 *****
2026-02-05 04:52:52.554388 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:52:52.554407 | orchestrator | 
2026-02-05 04:52:52.554426 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 04:52:52.554486 | orchestrator | Thursday 05 February 2026 04:52:52 +0000 (0:00:00.761) 0:12:37.355 *****
2026-02-05 04:53:40.822088 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822270 | orchestrator | 
2026-02-05 04:53:40.822299 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 04:53:40.822318 | orchestrator | Thursday 05 February 2026 04:52:54 +0000 (0:00:01.634) 0:12:38.990 *****
2026-02-05 04:53:40.822335 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822351 | orchestrator | 
2026-02-05 04:53:40.822367 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 04:53:40.822384 | orchestrator | Thursday 05 February 2026 04:52:54 +0000 (0:00:00.756) 0:12:39.746 *****
2026-02-05 04:53:40.822395 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822404 | orchestrator | 
2026-02-05 04:53:40.822414 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 04:53:40.822424 | orchestrator | Thursday 05 February 2026 04:52:55 +0000 (0:00:00.767) 0:12:40.513 *****
2026-02-05 04:53:40.822433 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822442 | orchestrator | 
2026-02-05 04:53:40.822451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 04:53:40.822474 | orchestrator | Thursday 05 February 2026 04:52:56 +0000 (0:00:00.768) 0:12:41.281 *****
2026-02-05 04:53:40.822484 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822493 | orchestrator | 
2026-02-05 04:53:40.822502 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 04:53:40.822510 | orchestrator | Thursday 05 February 2026 04:52:57 +0000 (0:00:00.764) 0:12:42.046 *****
2026-02-05 04:53:40.822519 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822528 | orchestrator | 
2026-02-05 04:53:40.822537 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 04:53:40.822546 | orchestrator | Thursday 05 February 2026 04:52:57 +0000 (0:00:00.765) 0:12:42.812 *****
2026-02-05 04:53:40.822554 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822563 | orchestrator | 
2026-02-05 04:53:40.822572 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 04:53:40.822581 | orchestrator | Thursday 05 February 2026 04:52:58 +0000 (0:00:00.769) 0:12:43.582 *****
2026-02-05 04:53:40.822590 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-02-05 04:53:40.822599 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-02-05 04:53:40.822614 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-02-05 04:53:40.822629 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822642 | orchestrator | 
2026-02-05 04:53:40.822657 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 04:53:40.822672 | orchestrator | Thursday 05 February 2026 04:52:59 +0000 (0:00:01.054) 0:12:44.636 *****
2026-02-05 04:53:40.822687 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-02-05 04:53:40.822701 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-02-05 04:53:40.822716 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-02-05 04:53:40.822732 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822777 | orchestrator | 
2026-02-05 04:53:40.822787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 04:53:40.822801 | orchestrator | Thursday 05 February 2026 04:53:00 +0000 (0:00:01.082) 0:12:45.719 *****
2026-02-05 04:53:40.822814 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-02-05 04:53:40.822823 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-02-05 04:53:40.822832 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-02-05 04:53:40.822841 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822849 | orchestrator | 
2026-02-05 04:53:40.822858 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 04:53:40.822867 | orchestrator | Thursday 05 February 2026 04:53:01 +0000 (0:00:01.042) 0:12:46.762 *****
2026-02-05 04:53:40.822875 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822884 | orchestrator | 
2026-02-05 04:53:40.822893 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 04:53:40.822902 | orchestrator | Thursday 05 February 2026 04:53:02 +0000 (0:00:00.768) 0:12:47.531 *****
2026-02-05 04:53:40.822911 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2026-02-05 04:53:40.822920 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.822928 | orchestrator | 
2026-02-05 04:53:40.822937 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 04:53:40.822946 | orchestrator | Thursday 05 February 2026 04:53:03 +0000 (0:00:00.904) 0:12:48.436 *****
2026-02-05 04:53:40.822955 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:53:40.822965 | orchestrator | 
2026-02-05 04:53:40.822973 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-05 04:53:40.822982 | orchestrator | Thursday 05 February 2026 04:53:05 +0000 (0:00:01.770) 0:12:50.206 *****
2026-02-05 04:53:40.822991 | orchestrator | ok: [testbed-node-1]
2026-02-05 04:53:40.822999 | orchestrator | 
2026-02-05 04:53:40.823008 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-05 04:53:40.823017 | orchestrator | Thursday 05 February 2026 04:53:06 +0000 (0:00:00.817) 0:12:51.024 *****
2026-02-05 04:53:40.823026 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-05 04:53:40.823035 | orchestrator | 
2026-02-05 04:53:40.823047 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-05 04:53:40.823061 | orchestrator | Thursday 05 February 2026 04:53:07 +0000 (0:00:01.138) 0:12:52.162 *****
2026-02-05 04:53:40.823101 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-05 04:53:40.823117 | orchestrator | 
2026-02-05 04:53:40.823160 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-05 04:53:40.823176 | orchestrator | Thursday 05 February 2026 04:53:10 +0000 (0:00:03.187) 0:12:55.350 *****
2026-02-05 04:53:40.823191 | orchestrator | skipping: [testbed-node-1]
2026-02-05 04:53:40.823206 | orchestrator | 
2026-02-05 04:53:40.823218 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-05 04:53:40.823246 | orchestrator | Thursday 05 February 2026 04:53:11 +0000 (0:00:01.142) 0:12:56.492 ***** 2026-02-05 04:53:40.823256 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823265 | orchestrator | 2026-02-05 04:53:40.823274 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-05 04:53:40.823282 | orchestrator | Thursday 05 February 2026 04:53:12 +0000 (0:00:01.132) 0:12:57.625 ***** 2026-02-05 04:53:40.823291 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823300 | orchestrator | 2026-02-05 04:53:40.823308 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-05 04:53:40.823317 | orchestrator | Thursday 05 February 2026 04:53:13 +0000 (0:00:01.131) 0:12:58.757 ***** 2026-02-05 04:53:40.823326 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:53:40.823334 | orchestrator | 2026-02-05 04:53:40.823344 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-05 04:53:40.823362 | orchestrator | Thursday 05 February 2026 04:53:15 +0000 (0:00:02.061) 0:13:00.819 ***** 2026-02-05 04:53:40.823371 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823380 | orchestrator | 2026-02-05 04:53:40.823389 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-05 04:53:40.823405 | orchestrator | Thursday 05 February 2026 04:53:17 +0000 (0:00:01.561) 0:13:02.380 ***** 2026-02-05 04:53:40.823414 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823423 | orchestrator | 2026-02-05 04:53:40.823431 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-05 04:53:40.823440 | orchestrator | Thursday 05 February 2026 04:53:19 +0000 (0:00:01.494) 0:13:03.875 
***** 2026-02-05 04:53:40.823449 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823457 | orchestrator | 2026-02-05 04:53:40.823466 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-05 04:53:40.823475 | orchestrator | Thursday 05 February 2026 04:53:20 +0000 (0:00:01.517) 0:13:05.392 ***** 2026-02-05 04:53:40.823483 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-05 04:53:40.823492 | orchestrator | 2026-02-05 04:53:40.823501 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-05 04:53:40.823510 | orchestrator | Thursday 05 February 2026 04:53:22 +0000 (0:00:01.916) 0:13:07.308 ***** 2026-02-05 04:53:40.823518 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-05 04:53:40.823527 | orchestrator | 2026-02-05 04:53:40.823536 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-05 04:53:40.823544 | orchestrator | Thursday 05 February 2026 04:53:24 +0000 (0:00:01.585) 0:13:08.894 ***** 2026-02-05 04:53:40.823553 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 04:53:40.823562 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-05 04:53:40.823571 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 04:53:40.823579 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-05 04:53:40.823588 | orchestrator | 2026-02-05 04:53:40.823597 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-05 04:53:40.823605 | orchestrator | Thursday 05 February 2026 04:53:27 +0000 (0:00:03.895) 0:13:12.789 ***** 2026-02-05 04:53:40.823614 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:53:40.823623 | orchestrator | 2026-02-05 04:53:40.823631 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container 
command] ************************** 2026-02-05 04:53:40.823640 | orchestrator | Thursday 05 February 2026 04:53:29 +0000 (0:00:02.027) 0:13:14.816 ***** 2026-02-05 04:53:40.823649 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823670 | orchestrator | 2026-02-05 04:53:40.823679 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-05 04:53:40.823688 | orchestrator | Thursday 05 February 2026 04:53:31 +0000 (0:00:01.170) 0:13:15.987 ***** 2026-02-05 04:53:40.823696 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823705 | orchestrator | 2026-02-05 04:53:40.823713 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-05 04:53:40.823722 | orchestrator | Thursday 05 February 2026 04:53:32 +0000 (0:00:01.144) 0:13:17.132 ***** 2026-02-05 04:53:40.823731 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823739 | orchestrator | 2026-02-05 04:53:40.823748 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-05 04:53:40.823756 | orchestrator | Thursday 05 February 2026 04:53:34 +0000 (0:00:01.773) 0:13:18.905 ***** 2026-02-05 04:53:40.823765 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:53:40.823774 | orchestrator | 2026-02-05 04:53:40.823782 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-05 04:53:40.823791 | orchestrator | Thursday 05 February 2026 04:53:35 +0000 (0:00:01.489) 0:13:20.394 ***** 2026-02-05 04:53:40.823800 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:53:40.823808 | orchestrator | 2026-02-05 04:53:40.823817 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-05 04:53:40.823832 | orchestrator | Thursday 05 February 2026 04:53:36 +0000 (0:00:00.761) 0:13:21.156 ***** 2026-02-05 04:53:40.823841 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-05 04:53:40.823850 | orchestrator | 2026-02-05 04:53:40.823858 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-05 04:53:40.823867 | orchestrator | Thursday 05 February 2026 04:53:37 +0000 (0:00:01.080) 0:13:22.236 ***** 2026-02-05 04:53:40.823876 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:53:40.823884 | orchestrator | 2026-02-05 04:53:40.823893 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-05 04:53:40.823901 | orchestrator | Thursday 05 February 2026 04:53:38 +0000 (0:00:01.114) 0:13:23.351 ***** 2026-02-05 04:53:40.823910 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:53:40.823918 | orchestrator | 2026-02-05 04:53:40.823927 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-05 04:53:40.823936 | orchestrator | Thursday 05 February 2026 04:53:39 +0000 (0:00:01.099) 0:13:24.450 ***** 2026-02-05 04:53:40.823944 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-05 04:53:40.823953 | orchestrator | 2026-02-05 04:53:40.823968 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-05 04:54:48.622600 | orchestrator | Thursday 05 February 2026 04:53:40 +0000 (0:00:01.180) 0:13:25.630 ***** 2026-02-05 04:54:48.622729 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:54:48.622759 | orchestrator | 2026-02-05 04:54:48.622780 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-05 04:54:48.622897 | orchestrator | Thursday 05 February 2026 04:53:43 +0000 (0:00:02.239) 0:13:27.870 ***** 2026-02-05 04:54:48.622920 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:54:48.622940 | orchestrator | 2026-02-05 04:54:48.622959 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-05 04:54:48.622978 | orchestrator | Thursday 05 February 2026 04:53:44 +0000 (0:00:01.896) 0:13:29.766 ***** 2026-02-05 04:54:48.622996 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:54:48.623015 | orchestrator | 2026-02-05 04:54:48.623033 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-05 04:54:48.623052 | orchestrator | Thursday 05 February 2026 04:53:47 +0000 (0:00:02.449) 0:13:32.216 ***** 2026-02-05 04:54:48.623066 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:54:48.623077 | orchestrator | 2026-02-05 04:54:48.623089 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-05 04:54:48.623146 | orchestrator | Thursday 05 February 2026 04:53:50 +0000 (0:00:02.799) 0:13:35.015 ***** 2026-02-05 04:54:48.623166 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-05 04:54:48.623180 | orchestrator | 2026-02-05 04:54:48.623193 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-05 04:54:48.623206 | orchestrator | Thursday 05 February 2026 04:53:51 +0000 (0:00:01.124) 0:13:36.140 ***** 2026-02-05 04:54:48.623219 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-05 04:54:48.623233 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:54:48.623245 | orchestrator | 2026-02-05 04:54:48.623258 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-05 04:54:48.623270 | orchestrator | Thursday 05 February 2026 04:54:14 +0000 (0:00:22.891) 0:13:59.031 ***** 2026-02-05 04:54:48.623283 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:54:48.623296 | orchestrator | 2026-02-05 04:54:48.623309 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-05 04:54:48.623322 | orchestrator | Thursday 05 February 2026 04:54:16 +0000 (0:00:02.735) 0:14:01.767 ***** 2026-02-05 04:54:48.623334 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:54:48.623347 | orchestrator | 2026-02-05 04:54:48.623360 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-05 04:54:48.623373 | orchestrator | Thursday 05 February 2026 04:54:17 +0000 (0:00:00.773) 0:14:02.540 ***** 2026-02-05 04:54:48.623414 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-05 04:54:48.623432 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-05 04:54:48.623446 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-05 04:54:48.623459 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-05 04:54:48.623474 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-05 04:54:48.623508 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}])  2026-02-05 04:54:48.623522 | orchestrator | 2026-02-05 04:54:48.623533 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-05 04:54:48.623544 | orchestrator | Thursday 05 February 2026 04:54:27 +0000 (0:00:09.849) 0:14:12.390 ***** 2026-02-05 04:54:48.623555 | orchestrator | changed: [testbed-node-1] 2026-02-05 04:54:48.623566 | orchestrator | 
2026-02-05 04:54:48.623576 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:54:48.623587 | orchestrator | Thursday 05 February 2026 04:54:29 +0000 (0:00:02.160) 0:14:14.551 ***** 2026-02-05 04:54:48.623598 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:54:48.623608 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-05 04:54:48.623619 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-05 04:54:48.623630 | orchestrator | 2026-02-05 04:54:48.623641 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:54:48.623658 | orchestrator | Thursday 05 February 2026 04:54:31 +0000 (0:00:01.812) 0:14:16.364 ***** 2026-02-05 04:54:48.623669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 04:54:48.623681 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 04:54:48.623692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 04:54:48.623702 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:54:48.623721 | orchestrator | 2026-02-05 04:54:48.623732 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-05 04:54:48.623743 | orchestrator | Thursday 05 February 2026 04:54:32 +0000 (0:00:01.023) 0:14:17.387 ***** 2026-02-05 04:54:48.623754 | orchestrator | skipping: [testbed-node-1] 2026-02-05 04:54:48.623765 | orchestrator | 2026-02-05 04:54:48.623776 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-05 04:54:48.623786 | orchestrator | Thursday 05 February 2026 04:54:33 +0000 (0:00:00.799) 0:14:18.187 ***** 2026-02-05 04:54:48.623797 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:54:48.623808 | orchestrator | 2026-02-05 04:54:48.623818 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-05 04:54:48.623829 | orchestrator | 2026-02-05 04:54:48.623840 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-05 04:54:48.623851 | orchestrator | Thursday 05 February 2026 04:54:35 +0000 (0:00:02.113) 0:14:20.300 ***** 2026-02-05 04:54:48.623862 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.623872 | orchestrator | 2026-02-05 04:54:48.623883 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-05 04:54:48.623894 | orchestrator | Thursday 05 February 2026 04:54:36 +0000 (0:00:01.077) 0:14:21.377 ***** 2026-02-05 04:54:48.623905 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.623915 | orchestrator | 2026-02-05 04:54:48.623926 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-05 04:54:48.623937 | orchestrator | Thursday 05 February 2026 04:54:37 +0000 (0:00:00.774) 0:14:22.151 ***** 2026-02-05 04:54:48.623948 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:54:48.623958 | orchestrator | 2026-02-05 04:54:48.623969 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-05 04:54:48.623980 | orchestrator | Thursday 05 February 2026 04:54:38 +0000 (0:00:00.838) 0:14:22.990 ***** 2026-02-05 04:54:48.623991 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624001 | orchestrator | 2026-02-05 04:54:48.624012 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 04:54:48.624023 | orchestrator | Thursday 05 February 
2026 04:54:38 +0000 (0:00:00.815) 0:14:23.806 ***** 2026-02-05 04:54:48.624033 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-05 04:54:48.624044 | orchestrator | 2026-02-05 04:54:48.624055 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 04:54:48.624066 | orchestrator | Thursday 05 February 2026 04:54:40 +0000 (0:00:01.101) 0:14:24.907 ***** 2026-02-05 04:54:48.624076 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624087 | orchestrator | 2026-02-05 04:54:48.624098 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 04:54:48.624108 | orchestrator | Thursday 05 February 2026 04:54:41 +0000 (0:00:01.471) 0:14:26.379 ***** 2026-02-05 04:54:48.624168 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624180 | orchestrator | 2026-02-05 04:54:48.624191 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 04:54:48.624202 | orchestrator | Thursday 05 February 2026 04:54:42 +0000 (0:00:01.104) 0:14:27.483 ***** 2026-02-05 04:54:48.624213 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624223 | orchestrator | 2026-02-05 04:54:48.624234 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 04:54:48.624245 | orchestrator | Thursday 05 February 2026 04:54:44 +0000 (0:00:01.423) 0:14:28.907 ***** 2026-02-05 04:54:48.624256 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624266 | orchestrator | 2026-02-05 04:54:48.624277 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 04:54:48.624288 | orchestrator | Thursday 05 February 2026 04:54:45 +0000 (0:00:01.113) 0:14:30.020 ***** 2026-02-05 04:54:48.624299 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624309 | orchestrator | 2026-02-05 04:54:48.624320 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 04:54:48.624339 | orchestrator | Thursday 05 February 2026 04:54:46 +0000 (0:00:01.131) 0:14:31.152 ***** 2026-02-05 04:54:48.624350 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:54:48.624360 | orchestrator | 2026-02-05 04:54:48.624371 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 04:54:48.624382 | orchestrator | Thursday 05 February 2026 04:54:47 +0000 (0:00:01.108) 0:14:32.261 ***** 2026-02-05 04:54:48.624393 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:54:48.624403 | orchestrator | 2026-02-05 04:54:48.624414 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 04:54:48.624433 | orchestrator | Thursday 05 February 2026 04:54:48 +0000 (0:00:01.163) 0:14:33.424 ***** 2026-02-05 04:55:12.000721 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:12.000811 | orchestrator | 2026-02-05 04:55:12.000823 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 04:55:12.000836 | orchestrator | Thursday 05 February 2026 04:54:49 +0000 (0:00:01.135) 0:14:34.560 ***** 2026-02-05 04:55:12.000847 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:55:12.000859 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:55:12.000870 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 04:55:12.000881 | orchestrator | 2026-02-05 04:55:12.000893 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 04:55:12.000905 | orchestrator | Thursday 05 February 2026 04:54:51 +0000 (0:00:01.649) 0:14:36.209 ***** 2026-02-05 04:55:12.000912 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:12.000919 | 
orchestrator | 2026-02-05 04:55:12.000925 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 04:55:12.000960 | orchestrator | Thursday 05 February 2026 04:54:52 +0000 (0:00:01.225) 0:14:37.435 ***** 2026-02-05 04:55:12.000968 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:55:12.000975 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:55:12.000981 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 04:55:12.000988 | orchestrator | 2026-02-05 04:55:12.000995 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 04:55:12.001002 | orchestrator | Thursday 05 February 2026 04:54:55 +0000 (0:00:02.794) 0:14:40.229 ***** 2026-02-05 04:55:12.001010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 04:55:12.001017 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 04:55:12.001024 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 04:55:12.001031 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:12.001037 | orchestrator | 2026-02-05 04:55:12.001044 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 04:55:12.001050 | orchestrator | Thursday 05 February 2026 04:54:56 +0000 (0:00:01.334) 0:14:41.563 ***** 2026-02-05 04:55:12.001058 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 04:55:12.001069 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 04:55:12.001076 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 04:55:12.001083 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:12.001089 | orchestrator | 2026-02-05 04:55:12.001096 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 04:55:12.001154 | orchestrator | Thursday 05 February 2026 04:54:58 +0000 (0:00:01.721) 0:14:43.285 ***** 2026-02-05 04:55:12.001165 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:12.001176 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:12.001183 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:12.001190 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:12.001197 | orchestrator | 2026-02-05 04:55:12.001204 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 04:55:12.001210 | orchestrator | Thursday 05 February 2026 04:54:59 +0000 (0:00:01.127) 0:14:44.413 ***** 2026-02-05 04:55:12.001232 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 04:54:53.142916', 'end': '2026-02-05 04:54:53.189018', 'delta': '0:00:00.046102', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 04:55:12.001246 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 04:54:53.708637', 'end': '2026-02-05 04:54:53.752575', 'delta': '0:00:00.043938', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 04:55:12.001253 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '458f6feaf079', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 04:54:54.499662', 'end': '2026-02-05 04:54:54.548674', 'delta': '0:00:00.049012', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['458f6feaf079'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 04:55:12.001266 | orchestrator | 2026-02-05 04:55:12.001274 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 04:55:12.001280 | orchestrator | Thursday 05 February 2026 04:55:00 +0000 (0:00:01.192) 0:14:45.606 ***** 2026-02-05 04:55:12.001287 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:12.001294 | orchestrator | 2026-02-05 04:55:12.001300 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 04:55:12.001307 | orchestrator | Thursday 05 February 2026 04:55:02 +0000 (0:00:01.228) 0:14:46.834 ***** 2026-02-05 04:55:12.001314 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:12.001320 | orchestrator | 2026-02-05 04:55:12.001327 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 04:55:12.001334 | orchestrator | Thursday 05 February 2026 04:55:03 +0000 (0:00:01.282) 0:14:48.117 ***** 2026-02-05 04:55:12.001340 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:12.001347 | orchestrator | 2026-02-05 04:55:12.001354 | orchestrator | TASK 
[ceph-facts : Get current fsid] *******************************************
2026-02-05 04:55:12.001360 | orchestrator | Thursday 05 February 2026 04:55:04 +0000 (0:00:01.148) 0:14:49.265 *****
2026-02-05 04:55:12.001367 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 04:55:12.001373 | orchestrator |
2026-02-05 04:55:12.001380 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 04:55:12.001386 | orchestrator | Thursday 05 February 2026 04:55:06 +0000 (0:00:02.027) 0:14:51.293 *****
2026-02-05 04:55:12.001393 | orchestrator | ok: [testbed-node-2]
2026-02-05 04:55:12.001399 | orchestrator |
2026-02-05 04:55:12.001406 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 04:55:12.001413 | orchestrator | Thursday 05 February 2026 04:55:07 +0000 (0:00:01.104) 0:14:52.397 *****
2026-02-05 04:55:12.001419 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:12.001426 | orchestrator |
2026-02-05 04:55:12.001432 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 04:55:12.001439 | orchestrator | Thursday 05 February 2026 04:55:08 +0000 (0:00:01.111) 0:14:53.509 *****
2026-02-05 04:55:12.001446 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:12.001452 | orchestrator |
2026-02-05 04:55:12.001459 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 04:55:12.001466 | orchestrator | Thursday 05 February 2026 04:55:09 +0000 (0:00:01.132) 0:14:54.642 *****
2026-02-05 04:55:12.001472 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:12.001479 | orchestrator |
2026-02-05 04:55:12.001485 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 04:55:12.001492 | orchestrator | Thursday 05 February 2026 04:55:10 +0000 (0:00:01.097) 0:14:55.739 *****
2026-02-05 04:55:12.001498 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:12.001505 | orchestrator |
2026-02-05 04:55:12.001512 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 04:55:12.001522 | orchestrator | Thursday 05 February 2026 04:55:11 +0000 (0:00:01.068) 0:14:56.808 *****
2026-02-05 04:55:19.934606 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:19.934700 | orchestrator |
2026-02-05 04:55:19.934714 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 04:55:19.934724 | orchestrator | Thursday 05 February 2026 04:55:13 +0000 (0:00:01.081) 0:14:57.890 *****
2026-02-05 04:55:19.934733 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:19.934741 | orchestrator |
2026-02-05 04:55:19.934750 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 04:55:19.934758 | orchestrator | Thursday 05 February 2026 04:55:14 +0000 (0:00:01.082) 0:14:58.972 *****
2026-02-05 04:55:19.934766 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:19.934773 | orchestrator |
2026-02-05 04:55:19.934781 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 04:55:19.934790 | orchestrator | Thursday 05 February 2026 04:55:15 +0000 (0:00:01.081) 0:15:00.054 *****
2026-02-05 04:55:19.934818 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:19.934827 | orchestrator |
2026-02-05 04:55:19.934847 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 04:55:19.934856 | orchestrator | Thursday 05 February 2026 04:55:16 +0000 (0:00:01.086) 0:15:01.140 *****
2026-02-05 04:55:19.934864 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:55:19.934872 | orchestrator |
2026-02-05 04:55:19.934880 | orchestrator | TASK [ceph-facts : Collect existed devices]
************************************ 2026-02-05 04:55:19.934888 | orchestrator | Thursday 05 February 2026 04:55:17 +0000 (0:00:01.119) 0:15:02.260 ***** 2026-02-05 04:55:19.934897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.934909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.934917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.934927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 04:55:19.934938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.934946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.934981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.935008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48b9971a', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 04:55:19.935034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.935043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 04:55:19.935051 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:19.935059 | orchestrator | 2026-02-05 04:55:19.935067 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 04:55:19.935075 | orchestrator | Thursday 05 February 2026 04:55:18 +0000 (0:00:01.280) 0:15:03.540 ***** 2026-02-05 04:55:19.935084 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:19.935101 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434002 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434253 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434302 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434322 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434341 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48b9971a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434573 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434596 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 04:55:27.434617 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:27.434639 | orchestrator | 2026-02-05 04:55:27.434662 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 04:55:27.434685 | 
orchestrator | Thursday 05 February 2026 04:55:19 +0000 (0:00:01.209) 0:15:04.750 ***** 2026-02-05 04:55:27.434704 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:27.434724 | orchestrator | 2026-02-05 04:55:27.434744 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 04:55:27.434765 | orchestrator | Thursday 05 February 2026 04:55:21 +0000 (0:00:01.478) 0:15:06.229 ***** 2026-02-05 04:55:27.434783 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:27.434813 | orchestrator | 2026-02-05 04:55:27.434832 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:55:27.434851 | orchestrator | Thursday 05 February 2026 04:55:22 +0000 (0:00:01.093) 0:15:07.322 ***** 2026-02-05 04:55:27.434869 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:55:27.434887 | orchestrator | 2026-02-05 04:55:27.434906 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:55:27.434924 | orchestrator | Thursday 05 February 2026 04:55:23 +0000 (0:00:01.500) 0:15:08.823 ***** 2026-02-05 04:55:27.434943 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:27.434961 | orchestrator | 2026-02-05 04:55:27.434980 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 04:55:27.434997 | orchestrator | Thursday 05 February 2026 04:55:25 +0000 (0:00:01.111) 0:15:09.935 ***** 2026-02-05 04:55:27.435015 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:27.435034 | orchestrator | 2026-02-05 04:55:27.435051 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 04:55:27.435070 | orchestrator | Thursday 05 February 2026 04:55:26 +0000 (0:00:01.200) 0:15:11.136 ***** 2026-02-05 04:55:27.435089 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:55:27.435140 | orchestrator | 2026-02-05 04:55:27.435161 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:55:27.435197 | orchestrator | Thursday 05 February 2026 04:55:27 +0000 (0:00:01.108) 0:15:12.244 ***** 2026-02-05 04:56:06.222678 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-05 04:56:06.222773 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-05 04:56:06.222780 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 04:56:06.222785 | orchestrator | 2026-02-05 04:56:06.222791 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:56:06.222797 | orchestrator | Thursday 05 February 2026 04:55:29 +0000 (0:00:01.887) 0:15:14.132 ***** 2026-02-05 04:56:06.222814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 04:56:06.222819 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 04:56:06.222824 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 04:56:06.222828 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:56:06.222832 | orchestrator | 2026-02-05 04:56:06.222837 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 04:56:06.222842 | orchestrator | Thursday 05 February 2026 04:55:30 +0000 (0:00:01.176) 0:15:15.309 ***** 2026-02-05 04:56:06.222846 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:56:06.222850 | orchestrator | 2026-02-05 04:56:06.222854 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 04:56:06.222858 | orchestrator | Thursday 05 February 2026 04:55:31 +0000 (0:00:01.136) 0:15:16.446 ***** 2026-02-05 04:56:06.222862 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:56:06.222867 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-05 04:56:06.222871 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 04:56:06.222875 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:56:06.222880 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:56:06.222884 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:56:06.222888 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:56:06.222892 | orchestrator | 2026-02-05 04:56:06.222896 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 04:56:06.222900 | orchestrator | Thursday 05 February 2026 04:55:33 +0000 (0:00:02.052) 0:15:18.499 ***** 2026-02-05 04:56:06.222904 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:56:06.222924 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 04:56:06.222929 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 04:56:06.222933 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 04:56:06.222937 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 04:56:06.222941 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 04:56:06.222945 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 04:56:06.222949 | orchestrator | 2026-02-05 04:56:06.222953 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-05 04:56:06.222957 | orchestrator | Thursday 05 February 2026 04:55:36 +0000 (0:00:02.421) 0:15:20.921 
*****
2026-02-05 04:56:06.222961 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.222966 | orchestrator |
2026-02-05 04:56:06.222970 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-05 04:56:06.222974 | orchestrator | Thursday 05 February 2026 04:55:36 +0000 (0:00:00.851) 0:15:21.772 *****
2026-02-05 04:56:06.222978 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.222982 | orchestrator |
2026-02-05 04:56:06.222986 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-05 04:56:06.222990 | orchestrator | Thursday 05 February 2026 04:55:37 +0000 (0:00:00.889) 0:15:22.662 *****
2026-02-05 04:56:06.222994 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.222998 | orchestrator |
2026-02-05 04:56:06.223003 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-05 04:56:06.223007 | orchestrator | Thursday 05 February 2026 04:55:38 +0000 (0:00:00.773) 0:15:23.436 *****
2026-02-05 04:56:06.223011 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.223015 | orchestrator |
2026-02-05 04:56:06.223019 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-05 04:56:06.223023 | orchestrator | Thursday 05 February 2026 04:55:39 +0000 (0:00:00.841) 0:15:24.277 *****
2026-02-05 04:56:06.223027 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.223031 | orchestrator |
2026-02-05 04:56:06.223036 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-05 04:56:06.223040 | orchestrator | Thursday 05 February 2026 04:55:40 +0000 (0:00:00.766) 0:15:25.044 *****
2026-02-05 04:56:06.223044 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 04:56:06.223048 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 04:56:06.223052 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 04:56:06.223056 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.223060 | orchestrator |
2026-02-05 04:56:06.223064 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-05 04:56:06.223068 | orchestrator | Thursday 05 February 2026 04:55:41 +0000 (0:00:01.064) 0:15:26.108 *****
2026-02-05 04:56:06.223072 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-05 04:56:06.223077 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-05 04:56:06.223137 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-05 04:56:06.223144 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-05 04:56:06.223148 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-05 04:56:06.223152 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-05 04:56:06.223156 | orchestrator | skipping: [testbed-node-2]
2026-02-05 04:56:06.223160 | orchestrator |
2026-02-05 04:56:06.223167 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-05 04:56:06.223172 | orchestrator | Thursday 05 February 2026 04:55:42 +0000 (0:00:01.568) 0:15:27.677 *****
2026-02-05 04:56:06.223180 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 04:56:06.223184 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 04:56:06.223188 | orchestrator |
2026-02-05 04:56:06.223192 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-05 04:56:06.223196 | orchestrator | Thursday 05 February 2026 04:55:46 +0000 (0:00:03.223) 0:15:30.901 *****
2026-02-05 04:56:06.223201 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:56:06.223205 | orchestrator | 2026-02-05 04:56:06.223209 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 04:56:06.223213 | orchestrator | Thursday 05 February 2026 04:55:48 +0000 (0:00:02.161) 0:15:33.063 ***** 2026-02-05 04:56:06.223218 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-05 04:56:06.223223 | orchestrator | 2026-02-05 04:56:06.223228 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 04:56:06.223232 | orchestrator | Thursday 05 February 2026 04:55:49 +0000 (0:00:01.100) 0:15:34.164 ***** 2026-02-05 04:56:06.223237 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-05 04:56:06.223242 | orchestrator | 2026-02-05 04:56:06.223247 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 04:56:06.223252 | orchestrator | Thursday 05 February 2026 04:55:50 +0000 (0:00:01.207) 0:15:35.371 ***** 2026-02-05 04:56:06.223256 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:56:06.223262 | orchestrator | 2026-02-05 04:56:06.223266 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 04:56:06.223271 | orchestrator | Thursday 05 February 2026 04:55:52 +0000 (0:00:01.532) 0:15:36.903 ***** 2026-02-05 04:56:06.223276 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:56:06.223281 | orchestrator | 2026-02-05 04:56:06.223286 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 04:56:06.223291 | orchestrator | Thursday 05 February 2026 04:55:53 +0000 (0:00:01.118) 0:15:38.022 ***** 2026-02-05 04:56:06.223295 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 05 February 2026 04:55:54 +0000 (0:00:01.108) 0:15:39.130 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 05 February 2026 04:55:55 +0000 (0:00:01.136) 0:15:40.266 *****
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 05 February 2026 04:55:56 +0000 (0:00:01.546) 0:15:41.813 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 05 February 2026 04:55:58 +0000 (0:00:01.151) 0:15:42.965 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 05 February 2026 04:55:59 +0000 (0:00:01.126) 0:15:44.091 *****
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 05 February 2026 04:56:00 +0000 (0:00:01.578) 0:15:45.669 *****
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 05 February 2026 04:56:02 +0000 (0:00:01.534) 0:15:47.204 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 05 February 2026 04:56:03 +0000 (0:00:00.770) 0:15:47.975 *****
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 05 February 2026 04:56:03 +0000 (0:00:00.767) 0:15:48.743 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 05 February 2026 04:56:04 +0000 (0:00:00.758) 0:15:49.501 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 05 February 2026 04:56:05 +0000 (0:00:00.779) 0:15:50.281 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 05 February 2026 04:56:06 +0000 (0:00:00.752) 0:15:51.033 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 05 February 2026 04:56:06 +0000 (0:00:00.766) 0:15:51.800 *****
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 05 February 2026 04:56:07 +0000 (0:00:00.757) 0:15:52.558 *****
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 05 February 2026 04:56:08 +0000 (0:00:00.798) 0:15:53.357 *****
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 05 February 2026 04:56:09 +0000 (0:00:00.771) 0:15:54.129 *****
ok: [testbed-node-2]

TASK [ceph-common : Include configure_repository.yml] **************************
Thursday 05 February 2026 04:56:10 +0000 (0:00:00.770) 0:15:54.899 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
Thursday 05 February 2026 04:56:10 +0000 (0:00:00.800) 0:15:55.700 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
Thursday 05 February 2026 04:56:11 +0000 (0:00:00.761) 0:15:56.461 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include installs/install_on_debian.yml] ********************
Thursday 05 February 2026 04:56:12 +0000 (0:00:00.752) 0:15:57.213 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
Thursday 05 February 2026 04:56:13 +0000 (0:00:00.797) 0:15:58.011 *****
skipping: [testbed-node-2]

TASK [ceph-common : Get ceph version] ******************************************
Thursday 05 February 2026 04:56:13 +0000 (0:00:00.773) 0:15:58.785 *****
skipping: [testbed-node-2]

TASK [ceph-common : Set_fact ceph_version] *************************************
Thursday 05 February 2026 04:56:14 +0000 (0:00:00.777) 0:15:59.562 *****
skipping: [testbed-node-2]

TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
Thursday 05 February 2026 04:56:15 +0000 (0:00:00.751) 0:16:00.314 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
Thursday 05 February 2026 04:56:16 +0000 (0:00:00.765) 0:16:01.080 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include configure_cluster_name.yml] ************************
Thursday 05 February 2026 04:56:17 +0000 (0:00:00.758) 0:16:01.839 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include configure_memory_allocator.yml] ********************
Thursday 05 February 2026 04:56:17 +0000 (0:00:00.750) 0:16:02.589 *****
skipping: [testbed-node-2]

TASK [ceph-common : Include selinux.yml] ***************************************
Thursday 05 February 2026 04:56:18 +0000 (0:00:00.798) 0:16:03.388 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Thursday 05 February 2026 04:56:19 +0000 (0:00:00.743) 0:16:04.132 *****
ok: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Thursday 05 February 2026 04:56:20 +0000 (0:00:01.581) 0:16:05.713 *****
ok: [testbed-node-2]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Thursday 05 February 2026 04:56:22 +0000 (0:00:02.101) 0:16:07.815 *****
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2

TASK [ceph-container-common : Stop lvmetad] ************************************
Thursday 05 February 2026 04:56:24 +0000 (0:00:01.113) 0:16:08.928 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Thursday 05 February 2026 04:56:25 +0000 (0:00:01.093) 0:16:10.022 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Thursday 05 February 2026 04:56:26 +0000 (0:00:01.102) 0:16:11.125 *****
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Thursday 05 February 2026 04:56:28 +0000 (0:00:02.416) 0:16:13.542 *****
ok: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Thursday 05 February 2026 04:56:30 +0000 (0:00:01.503) 0:16:15.045 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Thursday 05 February 2026 04:56:31 +0000 (0:00:01.229) 0:16:16.275 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Thursday 05 February 2026 04:56:32 +0000 (0:00:00.755) 0:16:17.031 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Thursday 05 February 2026 04:56:32 +0000 (0:00:00.771) 0:16:17.802 *****
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Thursday 05 February 2026 04:56:34 +0000 (0:00:01.130) 0:16:18.933 *****
ok: [testbed-node-2]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Thursday 05 February 2026 04:56:35 +0000 (0:00:01.730) 0:16:20.664 *****
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Thursday 05 February 2026 04:56:36 +0000 (0:00:01.130) 0:16:21.795 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Thursday 05 February 2026 04:56:38 +0000 (0:00:01.109) 0:16:22.905 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Thursday 05 February 2026 04:56:39 +0000 (0:00:01.128) 0:16:24.034 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Thursday 05 February 2026 04:56:40 +0000 (0:00:01.119) 0:16:25.153 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Thursday 05 February 2026 04:56:41 +0000 (0:00:01.149) 0:16:26.303 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Thursday 05 February 2026 04:56:42 +0000 (0:00:00.776) 0:16:27.080 *****
ok: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Thursday 05 February 2026 04:56:44 +0000 (0:00:02.262) 0:16:29.343 *****
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Thursday 05 February 2026 04:56:45 +0000 (0:00:00.755) 0:16:30.098 *****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Thursday 05 February 2026 04:56:46 +0000 (0:00:01.072) 0:16:31.171 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Thursday 05 February 2026 04:56:47 +0000 (0:00:01.108) 0:16:32.280 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Thursday 05 February 2026 04:56:48 +0000 (0:00:01.142) 0:16:33.423 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Thursday 05 February 2026 04:56:49 +0000 (0:00:01.157) 0:16:34.580 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Thursday 05 February 2026 04:56:50 +0000 (0:00:01.138) 0:16:35.719 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Thursday 05 February 2026 04:56:52 +0000 (0:00:01.112) 0:16:36.831 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Thursday 05 February 2026 04:56:53 +0000 (0:00:01.132) 0:16:37.964 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Thursday 05 February 2026 04:56:54 +0000 (0:00:01.117) 0:16:39.082 *****
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Thursday 05 February 2026 04:56:55 +0000 (0:00:01.116) 0:16:40.198 *****
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Thursday 05 February 2026 04:56:56 +0000 (0:00:00.789) 0:16:40.988 *****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Thursday 05 February 2026 04:56:57 +0000 (0:00:01.126) 0:16:42.115 *****
ok: [testbed-node-2] => (item=/etc/ceph)
ok: [testbed-node-2] => (item=/var/lib/ceph/)
ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
ok: [testbed-node-2] => (item=/var/run/ceph)
ok: [testbed-node-2] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Thursday 05 February 2026 04:57:03 +0000 (0:00:06.601) 0:16:48.717 *****
skipping: [testbed-node-2]

TASK [ceph-config : Reset num_osds] ********************************************
Thursday 05 February 2026 04:57:04 +0000 (0:00:00.745) 0:16:49.462 *****
skipping: [testbed-node-2]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Thursday 05 February 2026 04:57:05 +0000 (0:00:00.813) 0:16:50.276 *****
skipping: [testbed-node-2]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Thursday 05 February 2026 04:57:06 +0000 (0:00:00.773) 0:16:51.050 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Thursday 05 February 2026 04:57:06 +0000 (0:00:00.756) 0:16:51.806 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Thursday 05 February 2026 04:57:07 +0000 (0:00:00.753) 0:16:52.559 *****
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 05 February 2026 04:57:08 +0000 (0:00:00.813) 0:16:53.372 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Thursday 05 February 2026 04:57:09 +0000 (0:00:00.759) 0:16:54.132 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Thursday 05 February 2026 04:57:10 +0000 (0:00:00.756) 0:16:54.889 *****
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Thursday 05 February 2026 04:57:10 +0000 (0:00:00.815) 0:16:55.704 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Thursday 05 February 2026 04:57:11 +0000 (0:00:00.776) 0:16:56.481 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Thursday 05 February 2026 04:57:12 +0000 (0:00:00.753) 0:16:57.234 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Thursday 05 February 2026 04:57:13 +0000 (0:00:00.754) 0:16:57.988 *****
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Thursday 05 February 2026 04:57:14 +0000 (0:00:00.871) 0:16:58.860 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Thursday 05 February 2026 04:57:14 +0000 (0:00:00.770) 0:16:59.630 *****
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Thursday 05 February 2026 04:57:15 +0000 (0:00:00.872) 0:17:00.502 *****
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Thursday 05 February 2026 04:57:16 +0000 (0:00:00.790) 0:17:01.293 *****
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 05 February 2026 04:57:17 +0000 (0:00:00.755) 0:17:02.049 *****
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 05 February 2026 04:57:17 +0000 (0:00:00.756) 0:17:02.805 *****
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 05 February 2026 04:57:18 +0000 (0:00:00.783) 0:17:03.589 *****
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Thursday 05 February 2026 04:57:19 +0000 (0:00:00.758) 0:17:04.347 *****
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Thursday 05 February 2026 04:57:20 +0000 (0:00:00.780) 0:17:05.127 *****
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 05 February 2026 04:57:21 +0000 (0:00:01.038) 0:17:06.166 *****
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 05 February 2026 04:57:22 +0000 (0:00:01.039) 0:17:07.206 *****
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Thursday 05 February 2026 04:57:23 +0000 (0:00:01.025) 0:17:08.231 *****
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Thursday 05 February 2026 04:57:24 +0000 (0:00:00.788) 0:17:09.020 *****
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Thursday 05 February 2026 04:57:25 +0000 (0:00:00.882) 0:17:09.903 *****
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Thursday 05 February 2026 04:57:26 +0000 (0:00:01.368) 0:17:11.272 *****
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Thursday 05 February 2026 04:57:27 +0000 (0:00:00.789) 0:17:12.061 *****
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Thursday 05 February 2026 04:57:28 +0000 (0:00:01.136) 0:17:13.198 *****
ok: [testbed-node-2]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Thursday 05 February 2026 04:57:31 +0000 (0:00:03.613) 0:17:16.811 *****
skipping: [testbed-node-2]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Thursday 05 February 2026 04:57:33 +0000 (0:00:01.141) 0:17:17.953 *****
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Thursday 05 February 2026 04:57:34 +0000 (0:00:01.105) 0:17:19.059 *****
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Thursday 05 February 2026 04:57:35 +0000 (0:00:01.243) 0:17:20.303 *****
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Thursday 05 February 2026 04:57:37 +0000 (0:00:02.023) 0:17:22.326 *****
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Thursday 05 February 2026 04:57:39 +0000 (0:00:01.597) 0:17:23.924 *****
ok: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Thursday 05 February 2026 04:57:40 +0000 (0:00:01.492) 0:17:25.416 *****
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Thursday 05 February 2026 04:57:42 +0000 (0:00:01.477) 0:17:26.894 *****
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Thursday 05 February 2026 04:57:43 +0000 (0:00:01.633) 0:17:28.528 *****
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Thursday 05 February 2026 04:57:45 +0000 (0:00:01.599) 0:17:30.128 *****
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Thursday 05 February 2026 04:57:49 +0000 (0:00:03.865) 0:17:33.994 *****
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Thursday 05 February 2026 04:57:51 +0000 (0:00:02.047) 0:17:36.041 *****
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Thursday 05 February 2026 04:57:52 +0000 (0:00:01.114) 0:17:37.155 *****
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Thursday 05 February 2026 04:57:53 +0000 (0:00:01.190) 0:17:38.346 *****
ok: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Thursday 05 February 2026 04:57:55 +0000 (0:00:02.057) 0:17:40.404 *****
ok: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Thursday 05 February 2026 04:57:57 +0000 (0:00:01.438) 0:17:41.843 *****
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Thursday 05 February 2026 04:57:57 +0000 (0:00:00.788) 0:17:42.631 *****
included:
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-02-05 04:58:49.479052 | orchestrator | 2026-02-05 04:58:49.479058 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-05 04:58:49.479092 | orchestrator | Thursday 05 February 2026 04:57:58 +0000 (0:00:01.111) 0:17:43.742 ***** 2026-02-05 04:58:49.479099 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:58:49.479105 | orchestrator | 2026-02-05 04:58:49.479111 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-05 04:58:49.479118 | orchestrator | Thursday 05 February 2026 04:58:00 +0000 (0:00:01.114) 0:17:44.857 ***** 2026-02-05 04:58:49.479124 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:58:49.479129 | orchestrator | 2026-02-05 04:58:49.479135 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-05 04:58:49.479141 | orchestrator | Thursday 05 February 2026 04:58:01 +0000 (0:00:01.119) 0:17:45.977 ***** 2026-02-05 04:58:49.479147 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-02-05 04:58:49.479153 | orchestrator | 2026-02-05 04:58:49.479160 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-05 04:58:49.479166 | orchestrator | Thursday 05 February 2026 04:58:02 +0000 (0:00:01.132) 0:17:47.109 ***** 2026-02-05 04:58:49.479182 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:58:49.479188 | orchestrator | 2026-02-05 04:58:49.479194 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-05 04:58:49.479201 | orchestrator | Thursday 05 February 2026 04:58:04 +0000 (0:00:02.259) 0:17:49.369 ***** 2026-02-05 04:58:49.479207 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:58:49.479214 | orchestrator | 2026-02-05 04:58:49.479220 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-05 04:58:49.479226 | orchestrator | Thursday 05 February 2026 04:58:06 +0000 (0:00:01.997) 0:17:51.367 ***** 2026-02-05 04:58:49.479233 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:58:49.479239 | orchestrator | 2026-02-05 04:58:49.479244 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-05 04:58:49.479248 | orchestrator | Thursday 05 February 2026 04:58:09 +0000 (0:00:02.526) 0:17:53.893 ***** 2026-02-05 04:58:49.479253 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:58:49.479257 | orchestrator | 2026-02-05 04:58:49.479262 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-05 04:58:49.479266 | orchestrator | Thursday 05 February 2026 04:58:12 +0000 (0:00:02.948) 0:17:56.842 ***** 2026-02-05 04:58:49.479270 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-05 04:58:49.479275 | orchestrator | 2026-02-05 04:58:49.479279 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-05 04:58:49.479284 | orchestrator | Thursday 05 February 2026 04:58:13 +0000 (0:00:01.159) 0:17:58.002 ***** 2026-02-05 04:58:49.479288 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-05 04:58:49.479293 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:58:49.479297 | orchestrator | 2026-02-05 04:58:49.479302 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-05 04:58:49.479306 | orchestrator | Thursday 05 February 2026 04:58:36 +0000 (0:00:23.016) 0:18:21.019 ***** 2026-02-05 04:58:49.479310 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:58:49.479316 | orchestrator | 2026-02-05 04:58:49.479322 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-05 04:58:49.479332 | orchestrator | Thursday 05 February 2026 04:58:38 +0000 (0:00:02.738) 0:18:23.758 ***** 2026-02-05 04:58:49.479341 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:58:49.479347 | orchestrator | 2026-02-05 04:58:49.479353 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-05 04:58:49.479360 | orchestrator | Thursday 05 February 2026 04:58:39 +0000 (0:00:00.764) 0:18:24.523 ***** 2026-02-05 04:58:49.479378 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-05 04:59:24.628854 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-05 04:59:24.628957 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-05 04:59:24.628978 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-05 04:59:24.629020 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-05 04:59:24.629037 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c0e7d865768d571d1c20c6519d6af1fe46c65279'}])  2026-02-05 04:59:24.629053 | orchestrator | 2026-02-05 04:59:24.629133 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-05 04:59:24.629150 | orchestrator | Thursday 05 February 2026 04:58:49 +0000 (0:00:09.764) 0:18:34.287 ***** 2026-02-05 04:59:24.629166 | orchestrator | changed: [testbed-node-2] 2026-02-05 04:59:24.629183 | orchestrator | 
2026-02-05 04:59:24.629197 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 04:59:24.629212 | orchestrator | Thursday 05 February 2026 04:58:51 +0000 (0:00:02.206) 0:18:36.493 ***** 2026-02-05 04:59:24.629228 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 04:59:24.629243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-05 04:59:24.629258 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-05 04:59:24.629273 | orchestrator | 2026-02-05 04:59:24.629287 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 04:59:24.629303 | orchestrator | Thursday 05 February 2026 04:58:53 +0000 (0:00:01.817) 0:18:38.311 ***** 2026-02-05 04:59:24.629318 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 04:59:24.629334 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 04:59:24.629349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 04:59:24.629363 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:59:24.629378 | orchestrator | 2026-02-05 04:59:24.629394 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-05 04:59:24.629410 | orchestrator | Thursday 05 February 2026 04:58:54 +0000 (0:00:01.021) 0:18:39.333 ***** 2026-02-05 04:59:24.629425 | orchestrator | skipping: [testbed-node-2] 2026-02-05 04:59:24.629442 | orchestrator | 2026-02-05 04:59:24.629457 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-05 04:59:24.629472 | orchestrator | Thursday 05 February 2026 04:58:55 +0000 (0:00:00.811) 0:18:40.145 ***** 2026-02-05 04:59:24.629488 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:59:24.629504 | orchestrator | 2026-02-05 04:59:24.629520 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-05 04:59:24.629533 | orchestrator | 2026-02-05 04:59:24.629542 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-05 04:59:24.629550 | orchestrator | Thursday 05 February 2026 04:58:58 +0000 (0:00:02.843) 0:18:42.988 ***** 2026-02-05 04:59:24.629559 | orchestrator | ok: [testbed-node-0] 2026-02-05 04:59:24.629568 | orchestrator | ok: [testbed-node-1] 2026-02-05 04:59:24.629577 | orchestrator | ok: [testbed-node-2] 2026-02-05 04:59:24.629592 | orchestrator | 2026-02-05 04:59:24.629607 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-05 04:59:24.629621 | orchestrator | 2026-02-05 04:59:24.629637 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-05 04:59:24.629666 | orchestrator | Thursday 05 February 2026 04:58:59 +0000 (0:00:01.602) 0:18:44.591 ***** 2026-02-05 04:59:24.629682 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.629696 | orchestrator | 2026-02-05 04:59:24.629711 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 04:59:24.629758 | orchestrator | Thursday 05 February 2026 04:59:00 +0000 (0:00:01.207) 0:18:45.798 ***** 2026-02-05 04:59:24.629773 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.629788 | orchestrator | 2026-02-05 04:59:24.629804 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 04:59:24.629818 | orchestrator | Thursday 05 February 2026 04:59:02 +0000 (0:00:01.129) 0:18:46.928 
***** 2026-02-05 04:59:24.629833 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.629849 | orchestrator | 2026-02-05 04:59:24.629864 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 04:59:24.629880 | orchestrator | Thursday 05 February 2026 04:59:03 +0000 (0:00:01.142) 0:18:48.071 ***** 2026-02-05 04:59:24.629895 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.629909 | orchestrator | 2026-02-05 04:59:24.629924 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 04:59:24.629939 | orchestrator | Thursday 05 February 2026 04:59:04 +0000 (0:00:01.112) 0:18:49.184 ***** 2026-02-05 04:59:24.629953 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.629968 | orchestrator | 2026-02-05 04:59:24.629983 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 04:59:24.629999 | orchestrator | Thursday 05 February 2026 04:59:05 +0000 (0:00:01.118) 0:18:50.303 ***** 2026-02-05 04:59:24.630010 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630138 | orchestrator | 2026-02-05 04:59:24.630148 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 04:59:24.630157 | orchestrator | Thursday 05 February 2026 04:59:06 +0000 (0:00:01.099) 0:18:51.403 ***** 2026-02-05 04:59:24.630166 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630174 | orchestrator | 2026-02-05 04:59:24.630193 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 04:59:24.630202 | orchestrator | Thursday 05 February 2026 04:59:07 +0000 (0:00:01.130) 0:18:52.534 ***** 2026-02-05 04:59:24.630211 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630219 | orchestrator | 2026-02-05 04:59:24.630228 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] 
****************************** 2026-02-05 04:59:24.630237 | orchestrator | Thursday 05 February 2026 04:59:08 +0000 (0:00:01.130) 0:18:53.664 ***** 2026-02-05 04:59:24.630246 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630254 | orchestrator | 2026-02-05 04:59:24.630263 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 04:59:24.630272 | orchestrator | Thursday 05 February 2026 04:59:09 +0000 (0:00:01.115) 0:18:54.779 ***** 2026-02-05 04:59:24.630281 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630293 | orchestrator | 2026-02-05 04:59:24.630308 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 04:59:24.630322 | orchestrator | Thursday 05 February 2026 04:59:11 +0000 (0:00:01.136) 0:18:55.916 ***** 2026-02-05 04:59:24.630339 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630353 | orchestrator | 2026-02-05 04:59:24.630368 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 04:59:24.630380 | orchestrator | Thursday 05 February 2026 04:59:12 +0000 (0:00:01.123) 0:18:57.040 ***** 2026-02-05 04:59:24.630389 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630397 | orchestrator | 2026-02-05 04:59:24.630406 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 04:59:24.630415 | orchestrator | Thursday 05 February 2026 04:59:13 +0000 (0:00:01.114) 0:18:58.154 ***** 2026-02-05 04:59:24.630424 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630436 | orchestrator | 2026-02-05 04:59:24.630451 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 04:59:24.630477 | orchestrator | Thursday 05 February 2026 04:59:14 +0000 (0:00:01.155) 0:18:59.310 ***** 2026-02-05 04:59:24.630492 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 04:59:24.630507 | orchestrator | 2026-02-05 04:59:24.630519 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 04:59:24.630528 | orchestrator | Thursday 05 February 2026 04:59:15 +0000 (0:00:01.101) 0:19:00.411 ***** 2026-02-05 04:59:24.630537 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630546 | orchestrator | 2026-02-05 04:59:24.630554 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 04:59:24.630563 | orchestrator | Thursday 05 February 2026 04:59:16 +0000 (0:00:01.121) 0:19:01.533 ***** 2026-02-05 04:59:24.630572 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630581 | orchestrator | 2026-02-05 04:59:24.630590 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 04:59:24.630598 | orchestrator | Thursday 05 February 2026 04:59:17 +0000 (0:00:01.127) 0:19:02.660 ***** 2026-02-05 04:59:24.630607 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630616 | orchestrator | 2026-02-05 04:59:24.630625 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 04:59:24.630634 | orchestrator | Thursday 05 February 2026 04:59:18 +0000 (0:00:01.116) 0:19:03.777 ***** 2026-02-05 04:59:24.630642 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630651 | orchestrator | 2026-02-05 04:59:24.630660 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 04:59:24.630668 | orchestrator | Thursday 05 February 2026 04:59:20 +0000 (0:00:01.145) 0:19:04.922 ***** 2026-02-05 04:59:24.630677 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630686 | orchestrator | 2026-02-05 04:59:24.630695 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 
04:59:24.630704 | orchestrator | Thursday 05 February 2026 04:59:21 +0000 (0:00:01.148) 0:19:06.071 ***** 2026-02-05 04:59:24.630713 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630721 | orchestrator | 2026-02-05 04:59:24.630730 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 04:59:24.630739 | orchestrator | Thursday 05 February 2026 04:59:22 +0000 (0:00:01.087) 0:19:07.159 ***** 2026-02-05 04:59:24.630748 | orchestrator | skipping: [testbed-node-0] 2026-02-05 04:59:24.630756 | orchestrator | 2026-02-05 04:59:24.630765 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 04:59:24.630774 | orchestrator | Thursday 05 February 2026 04:59:23 +0000 (0:00:01.148) 0:19:08.308 ***** 2026-02-05 04:59:24.630802 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997285 | orchestrator | 2026-02-05 05:00:07.997396 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:00:07.997418 | orchestrator | Thursday 05 February 2026 04:59:24 +0000 (0:00:01.132) 0:19:09.440 ***** 2026-02-05 05:00:07.997437 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997454 | orchestrator | 2026-02-05 05:00:07.997471 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-05 05:00:07.997488 | orchestrator | Thursday 05 February 2026 04:59:25 +0000 (0:00:01.165) 0:19:10.605 ***** 2026-02-05 05:00:07.997505 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997522 | orchestrator | 2026-02-05 05:00:07.997538 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:00:07.997556 | orchestrator | Thursday 05 February 2026 04:59:26 +0000 (0:00:01.098) 0:19:11.704 ***** 2026-02-05 05:00:07.997573 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997585 | 
orchestrator | 2026-02-05 05:00:07.997595 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:00:07.997676 | orchestrator | Thursday 05 February 2026 04:59:27 +0000 (0:00:01.103) 0:19:12.808 ***** 2026-02-05 05:00:07.997687 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997697 | orchestrator | 2026-02-05 05:00:07.997730 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:00:07.997759 | orchestrator | Thursday 05 February 2026 04:59:29 +0000 (0:00:01.121) 0:19:13.930 ***** 2026-02-05 05:00:07.997776 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997798 | orchestrator | 2026-02-05 05:00:07.997820 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:00:07.997836 | orchestrator | Thursday 05 February 2026 04:59:30 +0000 (0:00:01.111) 0:19:15.041 ***** 2026-02-05 05:00:07.997853 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997868 | orchestrator | 2026-02-05 05:00:07.997883 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:00:07.997898 | orchestrator | Thursday 05 February 2026 04:59:31 +0000 (0:00:01.211) 0:19:16.253 ***** 2026-02-05 05:00:07.997914 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997930 | orchestrator | 2026-02-05 05:00:07.997947 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:00:07.997963 | orchestrator | Thursday 05 February 2026 04:59:32 +0000 (0:00:01.116) 0:19:17.370 ***** 2026-02-05 05:00:07.997980 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.997998 | orchestrator | 2026-02-05 05:00:07.998131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:00:07.998165 | orchestrator | Thursday 05 February 2026 
04:59:33 +0000 (0:00:01.126) 0:19:18.496 ***** 2026-02-05 05:00:07.998189 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998206 | orchestrator | 2026-02-05 05:00:07.998222 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:00:07.998236 | orchestrator | Thursday 05 February 2026 04:59:34 +0000 (0:00:01.111) 0:19:19.608 ***** 2026-02-05 05:00:07.998251 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998268 | orchestrator | 2026-02-05 05:00:07.998284 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:00:07.998299 | orchestrator | Thursday 05 February 2026 04:59:35 +0000 (0:00:01.153) 0:19:20.761 ***** 2026-02-05 05:00:07.998314 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998330 | orchestrator | 2026-02-05 05:00:07.998346 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:00:07.998363 | orchestrator | Thursday 05 February 2026 04:59:37 +0000 (0:00:01.136) 0:19:21.898 ***** 2026-02-05 05:00:07.998380 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998397 | orchestrator | 2026-02-05 05:00:07.998413 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:00:07.998429 | orchestrator | Thursday 05 February 2026 04:59:38 +0000 (0:00:01.145) 0:19:23.044 ***** 2026-02-05 05:00:07.998444 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998459 | orchestrator | 2026-02-05 05:00:07.998473 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:00:07.998487 | orchestrator | Thursday 05 February 2026 04:59:39 +0000 (0:00:01.112) 0:19:24.157 ***** 2026-02-05 05:00:07.998503 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998520 | orchestrator | 2026-02-05 05:00:07.998536 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:00:07.998553 | orchestrator | Thursday 05 February 2026 04:59:40 +0000 (0:00:01.136) 0:19:25.294 ***** 2026-02-05 05:00:07.998571 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998589 | orchestrator | 2026-02-05 05:00:07.998604 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 05:00:07.998622 | orchestrator | Thursday 05 February 2026 04:59:41 +0000 (0:00:01.138) 0:19:26.432 ***** 2026-02-05 05:00:07.998633 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998642 | orchestrator | 2026-02-05 05:00:07.998652 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:00:07.998662 | orchestrator | Thursday 05 February 2026 04:59:42 +0000 (0:00:01.119) 0:19:27.552 ***** 2026-02-05 05:00:07.998672 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998698 | orchestrator | 2026-02-05 05:00:07.998708 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:00:07.998719 | orchestrator | Thursday 05 February 2026 04:59:43 +0000 (0:00:01.112) 0:19:28.665 ***** 2026-02-05 05:00:07.998729 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998739 | orchestrator | 2026-02-05 05:00:07.998749 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:00:07.998759 | orchestrator | Thursday 05 February 2026 04:59:44 +0000 (0:00:01.103) 0:19:29.769 ***** 2026-02-05 05:00:07.998769 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998778 | orchestrator | 2026-02-05 05:00:07.998788 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 05:00:07.998798 | orchestrator | Thursday 05 
February 2026 04:59:46 +0000 (0:00:01.122) 0:19:30.892 ***** 2026-02-05 05:00:07.998870 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998935 | orchestrator | 2026-02-05 05:00:07.998947 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:00:07.998957 | orchestrator | Thursday 05 February 2026 04:59:47 +0000 (0:00:01.112) 0:19:32.004 ***** 2026-02-05 05:00:07.998966 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.998976 | orchestrator | 2026-02-05 05:00:07.998986 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:00:07.998995 | orchestrator | Thursday 05 February 2026 04:59:48 +0000 (0:00:01.113) 0:19:33.118 ***** 2026-02-05 05:00:07.999005 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.999014 | orchestrator | 2026-02-05 05:00:07.999024 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:00:07.999034 | orchestrator | Thursday 05 February 2026 04:59:49 +0000 (0:00:01.108) 0:19:34.226 ***** 2026-02-05 05:00:07.999043 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.999053 | orchestrator | 2026-02-05 05:00:07.999132 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:00:07.999143 | orchestrator | Thursday 05 February 2026 04:59:50 +0000 (0:00:01.137) 0:19:35.364 ***** 2026-02-05 05:00:07.999152 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.999162 | orchestrator | 2026-02-05 05:00:07.999172 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:00:07.999182 | orchestrator | Thursday 05 February 2026 04:59:51 +0000 (0:00:01.182) 0:19:36.546 ***** 2026-02-05 05:00:07.999191 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:00:07.999201 | orchestrator | 2026-02-05 
05:00:07.999211 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 05:00:07.999221 | orchestrator | Thursday 05 February 2026 04:59:52 +0000 (0:00:01.109) 0:19:37.656 *****
2026-02-05 05:00:07.999231 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999240 | orchestrator |
2026-02-05 05:00:07.999250 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 05:00:07.999260 | orchestrator | Thursday 05 February 2026 04:59:54 +0000 (0:00:01.219) 0:19:38.876 *****
2026-02-05 05:00:07.999270 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999291 | orchestrator |
2026-02-05 05:00:07.999301 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 05:00:07.999311 | orchestrator | Thursday 05 February 2026 04:59:55 +0000 (0:00:01.114) 0:19:39.991 *****
2026-02-05 05:00:07.999322 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999351 | orchestrator |
2026-02-05 05:00:07.999368 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:00:07.999385 | orchestrator | Thursday 05 February 2026 04:59:56 +0000 (0:00:01.143) 0:19:41.135 *****
2026-02-05 05:00:07.999401 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999416 | orchestrator |
2026-02-05 05:00:07.999430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:00:07.999460 | orchestrator | Thursday 05 February 2026 04:59:57 +0000 (0:00:01.121) 0:19:42.256 *****
2026-02-05 05:00:07.999476 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999492 | orchestrator |
2026-02-05 05:00:07.999508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:00:07.999526 | orchestrator | Thursday 05 February 2026 04:59:58 +0000 (0:00:01.109) 0:19:43.366 *****
2026-02-05 05:00:07.999543 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999560 | orchestrator |
2026-02-05 05:00:07.999577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:00:07.999594 | orchestrator | Thursday 05 February 2026 04:59:59 +0000 (0:00:01.151) 0:19:44.517 *****
2026-02-05 05:00:07.999610 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999626 | orchestrator |
2026-02-05 05:00:07.999642 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:00:07.999660 | orchestrator | Thursday 05 February 2026 05:00:00 +0000 (0:00:01.124) 0:19:45.642 *****
2026-02-05 05:00:07.999675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 05:00:07.999688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 05:00:07.999696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 05:00:07.999704 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999712 | orchestrator |
2026-02-05 05:00:07.999720 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:00:07.999728 | orchestrator | Thursday 05 February 2026 05:00:02 +0000 (0:00:01.391) 0:19:47.033 *****
2026-02-05 05:00:07.999736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 05:00:07.999743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 05:00:07.999751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 05:00:07.999759 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999767 | orchestrator |
2026-02-05 05:00:07.999775 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:00:07.999783 | orchestrator | Thursday 05 February 2026 05:00:03 +0000 (0:00:01.726) 0:19:48.760 *****
2026-02-05 05:00:07.999791 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 05:00:07.999799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 05:00:07.999806 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 05:00:07.999814 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999822 | orchestrator |
2026-02-05 05:00:07.999830 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:00:07.999838 | orchestrator | Thursday 05 February 2026 05:00:05 +0000 (0:00:01.672) 0:19:50.433 *****
2026-02-05 05:00:07.999846 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:07.999854 | orchestrator |
2026-02-05 05:00:07.999862 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:00:07.999870 | orchestrator | Thursday 05 February 2026 05:00:06 +0000 (0:00:01.130) 0:19:51.564 *****
2026-02-05 05:00:07.999878 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-05 05:00:07.999899 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.989738 | orchestrator |
2026-02-05 05:00:40.989832 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 05:00:40.989844 | orchestrator | Thursday 05 February 2026 05:00:07 +0000 (0:00:01.242) 0:19:52.806 *****
2026-02-05 05:00:40.989851 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.989860 | orchestrator |
2026-02-05 05:00:40.989867 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-05 05:00:40.989873 | orchestrator | Thursday 05 February 2026 05:00:09 +0000 (0:00:01.125) 0:19:53.932 *****
2026-02-05 05:00:40.989880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:00:40.989887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 05:00:40.989894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 05:00:40.989922 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.989929 | orchestrator |
2026-02-05 05:00:40.989935 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-05 05:00:40.989942 | orchestrator | Thursday 05 February 2026 05:00:10 +0000 (0:00:01.389) 0:19:55.322 *****
2026-02-05 05:00:40.989948 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.989954 | orchestrator |
2026-02-05 05:00:40.989960 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-05 05:00:40.989967 | orchestrator | Thursday 05 February 2026 05:00:11 +0000 (0:00:01.085) 0:19:56.407 *****
2026-02-05 05:00:40.989973 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.989979 | orchestrator |
2026-02-05 05:00:40.989985 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-05 05:00:40.989992 | orchestrator | Thursday 05 February 2026 05:00:12 +0000 (0:00:01.124) 0:19:57.532 *****
2026-02-05 05:00:40.989998 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.990004 | orchestrator |
2026-02-05 05:00:40.990011 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-05 05:00:40.990072 | orchestrator | Thursday 05 February 2026 05:00:13 +0000 (0:00:01.143) 0:19:58.675 *****
2026-02-05 05:00:40.990077 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:00:40.990084 | orchestrator |
2026-02-05 05:00:40.990090 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-05 05:00:40.990096 | orchestrator |
2026-02-05 05:00:40.990103 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-05 05:00:40.990109 | orchestrator | Thursday 05 February 2026 05:00:14 +0000 (0:00:00.965) 0:19:59.641 *****
2026-02-05 05:00:40.990115 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990121 | orchestrator |
2026-02-05 05:00:40.990127 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:00:40.990133 | orchestrator | Thursday 05 February 2026 05:00:15 +0000 (0:00:00.792) 0:20:00.434 *****
2026-02-05 05:00:40.990139 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990145 | orchestrator |
2026-02-05 05:00:40.990150 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 05:00:40.990155 | orchestrator | Thursday 05 February 2026 05:00:16 +0000 (0:00:00.806) 0:20:01.241 *****
2026-02-05 05:00:40.990159 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990163 | orchestrator |
2026-02-05 05:00:40.990166 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 05:00:40.990170 | orchestrator | Thursday 05 February 2026 05:00:17 +0000 (0:00:00.757) 0:20:01.998 *****
2026-02-05 05:00:40.990174 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990178 | orchestrator |
2026-02-05 05:00:40.990182 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 05:00:40.990186 | orchestrator | Thursday 05 February 2026 05:00:17 +0000 (0:00:00.780) 0:20:02.778 *****
2026-02-05 05:00:40.990192 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990198 | orchestrator |
2026-02-05 05:00:40.990204 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 05:00:40.990210 | orchestrator | Thursday 05 February 2026 05:00:18 +0000 (0:00:00.793) 0:20:03.572 *****
2026-02-05 05:00:40.990216 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990222 | orchestrator |
2026-02-05 05:00:40.990227 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 05:00:40.990233 | orchestrator | Thursday 05 February 2026 05:00:19 +0000 (0:00:00.749) 0:20:04.322 *****
2026-02-05 05:00:40.990239 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990245 | orchestrator |
2026-02-05 05:00:40.990252 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 05:00:40.990258 | orchestrator | Thursday 05 February 2026 05:00:20 +0000 (0:00:00.780) 0:20:05.102 *****
2026-02-05 05:00:40.990264 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990270 | orchestrator |
2026-02-05 05:00:40.990283 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 05:00:40.990287 | orchestrator | Thursday 05 February 2026 05:00:21 +0000 (0:00:00.776) 0:20:05.879 *****
2026-02-05 05:00:40.990291 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990295 | orchestrator |
2026-02-05 05:00:40.990299 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 05:00:40.990303 | orchestrator | Thursday 05 February 2026 05:00:21 +0000 (0:00:00.760) 0:20:06.640 *****
2026-02-05 05:00:40.990307 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990311 | orchestrator |
2026-02-05 05:00:40.990315 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 05:00:40.990318 | orchestrator | Thursday 05 February 2026 05:00:22 +0000 (0:00:00.765) 0:20:07.406 *****
2026-02-05 05:00:40.990352 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990357 | orchestrator |
2026-02-05 05:00:40.990361 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 05:00:40.990364 | orchestrator | Thursday 05 February 2026 05:00:23 +0000 (0:00:00.783) 0:20:08.189 *****
2026-02-05 05:00:40.990369 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990372 | orchestrator |
2026-02-05 05:00:40.990376 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 05:00:40.990382 | orchestrator | Thursday 05 February 2026 05:00:24 +0000 (0:00:00.788) 0:20:08.978 *****
2026-02-05 05:00:40.990386 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990390 | orchestrator |
2026-02-05 05:00:40.990406 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 05:00:40.990410 | orchestrator | Thursday 05 February 2026 05:00:24 +0000 (0:00:00.762) 0:20:09.740 *****
2026-02-05 05:00:40.990413 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990417 | orchestrator |
2026-02-05 05:00:40.990421 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 05:00:40.990425 | orchestrator | Thursday 05 February 2026 05:00:25 +0000 (0:00:00.788) 0:20:10.528 *****
2026-02-05 05:00:40.990429 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990432 | orchestrator |
2026-02-05 05:00:40.990436 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 05:00:40.990440 | orchestrator | Thursday 05 February 2026 05:00:26 +0000 (0:00:00.772) 0:20:11.300 *****
2026-02-05 05:00:40.990444 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990448 | orchestrator |
2026-02-05 05:00:40.990451 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 05:00:40.990455 | orchestrator | Thursday 05 February 2026 05:00:27 +0000 (0:00:00.770) 0:20:12.071 *****
2026-02-05 05:00:40.990459 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990463 | orchestrator |
2026-02-05 05:00:40.990467 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 05:00:40.990470 | orchestrator | Thursday 05 February 2026 05:00:28 +0000 (0:00:00.818) 0:20:12.890 *****
2026-02-05 05:00:40.990474 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990478 | orchestrator |
2026-02-05 05:00:40.990482 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 05:00:40.990486 | orchestrator | Thursday 05 February 2026 05:00:28 +0000 (0:00:00.766) 0:20:13.656 *****
2026-02-05 05:00:40.990490 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990493 | orchestrator |
2026-02-05 05:00:40.990497 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-05 05:00:40.990501 | orchestrator | Thursday 05 February 2026 05:00:29 +0000 (0:00:00.753) 0:20:14.410 *****
2026-02-05 05:00:40.990505 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990509 | orchestrator |
2026-02-05 05:00:40.990513 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-05 05:00:40.990517 | orchestrator | Thursday 05 February 2026 05:00:30 +0000 (0:00:00.815) 0:20:15.225 *****
2026-02-05 05:00:40.990520 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990528 | orchestrator |
2026-02-05 05:00:40.990532 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-05 05:00:40.990535 | orchestrator | Thursday 05 February 2026 05:00:31 +0000 (0:00:00.805) 0:20:16.031 *****
2026-02-05 05:00:40.990539 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990543 | orchestrator |
2026-02-05 05:00:40.990547 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-05 05:00:40.990551 | orchestrator | Thursday 05 February 2026 05:00:31 +0000 (0:00:00.767) 0:20:16.798 *****
2026-02-05 05:00:40.990555 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990558 | orchestrator |
2026-02-05 05:00:40.990562 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-05 05:00:40.990566 | orchestrator | Thursday 05 February 2026 05:00:33 +0000 (0:00:01.290) 0:20:18.089 *****
2026-02-05 05:00:40.990570 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990574 | orchestrator |
2026-02-05 05:00:40.990577 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 05:00:40.990581 | orchestrator | Thursday 05 February 2026 05:00:34 +0000 (0:00:00.778) 0:20:18.867 *****
2026-02-05 05:00:40.990585 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990589 | orchestrator |
2026-02-05 05:00:40.990593 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 05:00:40.990596 | orchestrator | Thursday 05 February 2026 05:00:34 +0000 (0:00:00.768) 0:20:19.636 *****
2026-02-05 05:00:40.990600 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990604 | orchestrator |
2026-02-05 05:00:40.990608 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 05:00:40.990612 | orchestrator | Thursday 05 February 2026 05:00:35 +0000 (0:00:00.783) 0:20:20.420 *****
2026-02-05 05:00:40.990615 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990619 | orchestrator |
2026-02-05 05:00:40.990623 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 05:00:40.990627 | orchestrator | Thursday 05 February 2026 05:00:36 +0000 (0:00:00.749) 0:20:21.169 *****
2026-02-05 05:00:40.990631 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990634 | orchestrator |
2026-02-05 05:00:40.990638 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 05:00:40.990642 | orchestrator | Thursday 05 February 2026 05:00:37 +0000 (0:00:00.760) 0:20:21.930 *****
2026-02-05 05:00:40.990646 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990650 | orchestrator |
2026-02-05 05:00:40.990653 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 05:00:40.990657 | orchestrator | Thursday 05 February 2026 05:00:37 +0000 (0:00:00.764) 0:20:22.695 *****
2026-02-05 05:00:40.990661 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990665 | orchestrator |
2026-02-05 05:00:40.990669 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 05:00:40.990673 | orchestrator | Thursday 05 February 2026 05:00:38 +0000 (0:00:00.788) 0:20:23.483 *****
2026-02-05 05:00:40.990676 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990680 | orchestrator |
2026-02-05 05:00:40.990684 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 05:00:40.990688 | orchestrator | Thursday 05 February 2026 05:00:39 +0000 (0:00:00.767) 0:20:24.250 *****
2026-02-05 05:00:40.990692 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990695 | orchestrator |
2026-02-05 05:00:40.990699 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 05:00:40.990703 | orchestrator | Thursday 05 February 2026 05:00:40 +0000 (0:00:00.747) 0:20:24.998 *****
2026-02-05 05:00:40.990707 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:00:40.990711 | orchestrator |
2026-02-05 05:00:40.990721 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 05:01:10.877452 | orchestrator | Thursday 05 February 2026 05:00:40 +0000 (0:00:00.800) 0:20:25.799 *****
2026-02-05 05:01:10.877573 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.877642 | orchestrator |
2026-02-05 05:01:10.877665 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 05:01:10.877682 | orchestrator | Thursday 05 February 2026 05:00:41 +0000 (0:00:00.798) 0:20:26.597 *****
2026-02-05 05:01:10.877701 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.877718 | orchestrator |
2026-02-05 05:01:10.877735 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 05:01:10.877752 | orchestrator | Thursday 05 February 2026 05:00:42 +0000 (0:00:00.773) 0:20:27.371 *****
2026-02-05 05:01:10.877770 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.877789 | orchestrator |
2026-02-05 05:01:10.877807 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 05:01:10.877824 | orchestrator | Thursday 05 February 2026 05:00:43 +0000 (0:00:00.812) 0:20:28.184 *****
2026-02-05 05:01:10.877843 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.877862 | orchestrator |
2026-02-05 05:01:10.877880 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 05:01:10.877896 | orchestrator | Thursday 05 February 2026 05:00:44 +0000 (0:00:00.765) 0:20:28.950 *****
2026-02-05 05:01:10.877907 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.877917 | orchestrator |
2026-02-05 05:01:10.877928 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 05:01:10.877939 | orchestrator | Thursday 05 February 2026 05:00:44 +0000 (0:00:00.778) 0:20:29.728 *****
2026-02-05 05:01:10.877950 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.877961 | orchestrator |
2026-02-05 05:01:10.877972 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 05:01:10.877987 | orchestrator | Thursday 05 February 2026 05:00:45 +0000 (0:00:00.850) 0:20:30.579 *****
2026-02-05 05:01:10.878001 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878095 | orchestrator |
2026-02-05 05:01:10.878112 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 05:01:10.878136 | orchestrator | Thursday 05 February 2026 05:00:46 +0000 (0:00:00.772) 0:20:31.351 *****
2026-02-05 05:01:10.878150 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878164 | orchestrator |
2026-02-05 05:01:10.878177 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 05:01:10.878190 | orchestrator | Thursday 05 February 2026 05:00:47 +0000 (0:00:00.753) 0:20:32.105 *****
2026-02-05 05:01:10.878203 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878215 | orchestrator |
2026-02-05 05:01:10.878229 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 05:01:10.878242 | orchestrator | Thursday 05 February 2026 05:00:48 +0000 (0:00:00.772) 0:20:32.878 *****
2026-02-05 05:01:10.878255 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878268 | orchestrator |
2026-02-05 05:01:10.878280 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 05:01:10.878293 | orchestrator | Thursday 05 February 2026 05:00:48 +0000 (0:00:00.765) 0:20:33.643 *****
2026-02-05 05:01:10.878307 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878320 | orchestrator |
2026-02-05 05:01:10.878333 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 05:01:10.878346 | orchestrator | Thursday 05 February 2026 05:00:49 +0000 (0:00:00.766) 0:20:34.410 *****
2026-02-05 05:01:10.878359 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878372 | orchestrator |
2026-02-05 05:01:10.878385 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 05:01:10.878397 | orchestrator | Thursday 05 February 2026 05:00:50 +0000 (0:00:00.772) 0:20:35.183 *****
2026-02-05 05:01:10.878408 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878419 | orchestrator |
2026-02-05 05:01:10.878430 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 05:01:10.878452 | orchestrator | Thursday 05 February 2026 05:00:51 +0000 (0:00:00.869) 0:20:36.052 *****
2026-02-05 05:01:10.878463 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878474 | orchestrator |
2026-02-05 05:01:10.878485 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 05:01:10.878496 | orchestrator | Thursday 05 February 2026 05:00:51 +0000 (0:00:00.755) 0:20:36.808 *****
2026-02-05 05:01:10.878507 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878518 | orchestrator |
2026-02-05 05:01:10.878529 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 05:01:10.878540 | orchestrator | Thursday 05 February 2026 05:00:52 +0000 (0:00:00.848) 0:20:37.656 *****
2026-02-05 05:01:10.878551 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878568 | orchestrator |
2026-02-05 05:01:10.878592 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 05:01:10.878618 | orchestrator | Thursday 05 February 2026 05:00:53 +0000 (0:00:00.752) 0:20:38.409 *****
2026-02-05 05:01:10.878636 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878653 | orchestrator |
2026-02-05 05:01:10.878671 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:01:10.878691 | orchestrator | Thursday 05 February 2026 05:00:54 +0000 (0:00:00.767) 0:20:39.176 *****
2026-02-05 05:01:10.878709 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878729 | orchestrator |
2026-02-05 05:01:10.878747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:01:10.878765 | orchestrator | Thursday 05 February 2026 05:00:55 +0000 (0:00:00.760) 0:20:39.936 *****
2026-02-05 05:01:10.878784 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878796 | orchestrator |
2026-02-05 05:01:10.878806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:01:10.878834 | orchestrator | Thursday 05 February 2026 05:00:55 +0000 (0:00:00.779) 0:20:40.716 *****
2026-02-05 05:01:10.878845 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878856 | orchestrator |
2026-02-05 05:01:10.878891 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:01:10.878902 | orchestrator | Thursday 05 February 2026 05:00:56 +0000 (0:00:00.780) 0:20:41.496 *****
2026-02-05 05:01:10.878914 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.878925 | orchestrator |
2026-02-05 05:01:10.878936 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:01:10.878947 | orchestrator | Thursday 05 February 2026 05:00:57 +0000 (0:00:00.781) 0:20:42.277 *****
2026-02-05 05:01:10.878958 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 05:01:10.878969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 05:01:10.878980 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 05:01:10.878991 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879002 | orchestrator |
2026-02-05 05:01:10.879013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:01:10.879024 | orchestrator | Thursday 05 February 2026 05:00:58 +0000 (0:00:01.029) 0:20:43.307 *****
2026-02-05 05:01:10.879035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 05:01:10.879046 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 05:01:10.879091 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 05:01:10.879102 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879113 | orchestrator |
2026-02-05 05:01:10.879124 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:01:10.879135 | orchestrator | Thursday 05 February 2026 05:00:59 +0000 (0:00:01.037) 0:20:44.344 *****
2026-02-05 05:01:10.879146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 05:01:10.879156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 05:01:10.879167 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 05:01:10.879189 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879200 | orchestrator |
2026-02-05 05:01:10.879210 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:01:10.879221 | orchestrator | Thursday 05 February 2026 05:01:00 +0000 (0:00:01.092) 0:20:45.436 *****
2026-02-05 05:01:10.879232 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879243 | orchestrator |
2026-02-05 05:01:10.879254 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:01:10.879265 | orchestrator | Thursday 05 February 2026 05:01:01 +0000 (0:00:00.812) 0:20:46.249 *****
2026-02-05 05:01:10.879276 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-05 05:01:10.879287 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879298 | orchestrator |
2026-02-05 05:01:10.879310 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 05:01:10.879321 | orchestrator | Thursday 05 February 2026 05:01:02 +0000 (0:00:00.879) 0:20:47.129 *****
2026-02-05 05:01:10.879332 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879343 | orchestrator |
2026-02-05 05:01:10.879354 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-05 05:01:10.879365 | orchestrator | Thursday 05 February 2026 05:01:03 +0000 (0:00:00.836) 0:20:47.965 *****
2026-02-05 05:01:10.879376 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-05 05:01:10.879387 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 05:01:10.879398 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-05 05:01:10.879408 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879419 | orchestrator |
2026-02-05 05:01:10.879430 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-05 05:01:10.879441 | orchestrator | Thursday 05 February 2026 05:01:04 +0000 (0:00:01.325) 0:20:49.291 *****
2026-02-05 05:01:10.879452 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879463 | orchestrator |
2026-02-05 05:01:10.879474 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-05 05:01:10.879485 | orchestrator | Thursday 05 February 2026 05:01:05 +0000 (0:00:00.800) 0:20:50.092 *****
2026-02-05 05:01:10.879495 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879506 | orchestrator |
2026-02-05 05:01:10.879517 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-05 05:01:10.879528 | orchestrator | Thursday 05 February 2026 05:01:06 +0000 (0:00:00.767) 0:20:50.859 *****
2026-02-05 05:01:10.879539 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879549 | orchestrator |
2026-02-05 05:01:10.879560 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-05 05:01:10.879571 | orchestrator | Thursday 05 February 2026 05:01:06 +0000 (0:00:00.753) 0:20:51.613 *****
2026-02-05 05:01:10.879582 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:01:10.879593 | orchestrator |
2026-02-05 05:01:10.879603 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-05 05:01:10.879614 | orchestrator |
2026-02-05 05:01:10.879625 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-05 05:01:10.879636 | orchestrator | Thursday 05 February 2026 05:01:07 +0000 (0:00:00.934) 0:20:52.547 *****
2026-02-05 05:01:10.879647 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:10.879658 | orchestrator |
2026-02-05 05:01:10.879668 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:01:10.879679 | orchestrator | Thursday 05 February 2026 05:01:08 +0000 (0:00:00.783) 0:20:53.331 *****
2026-02-05 05:01:10.879690 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:10.879700 | orchestrator |
2026-02-05 05:01:10.879711 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 05:01:10.879722 | orchestrator | Thursday 05 February 2026 05:01:09 +0000 (0:00:00.775) 0:20:54.107 *****
2026-02-05 05:01:10.879733 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:10.879756 | orchestrator |
2026-02-05 05:01:10.879767 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 05:01:10.879784 | orchestrator | Thursday 05 February 2026 05:01:10 +0000 (0:00:00.772) 0:20:54.879 *****
2026-02-05 05:01:10.879804 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725109 | orchestrator |
2026-02-05 05:01:41.725235 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 05:01:41.725264 | orchestrator | Thursday 05 February 2026 05:01:10 +0000 (0:00:00.806) 0:20:55.686 *****
2026-02-05 05:01:41.725284 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725304 | orchestrator |
2026-02-05 05:01:41.725323 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 05:01:41.725337 | orchestrator | Thursday 05 February 2026 05:01:11 +0000 (0:00:00.757) 0:20:56.444 *****
2026-02-05 05:01:41.725348 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725360 | orchestrator |
2026-02-05 05:01:41.725377 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 05:01:41.725395 | orchestrator | Thursday 05 February 2026 05:01:12 +0000 (0:00:00.773) 0:20:57.217 *****
2026-02-05 05:01:41.725415 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725433 | orchestrator |
2026-02-05 05:01:41.725451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 05:01:41.725471 | orchestrator | Thursday 05 February 2026 05:01:13 +0000 (0:00:00.797) 0:20:58.015 *****
2026-02-05 05:01:41.725489 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725507 | orchestrator |
2026-02-05 05:01:41.725525 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 05:01:41.725545 | orchestrator | Thursday 05 February 2026 05:01:13 +0000 (0:00:00.775) 0:20:58.790 *****
2026-02-05 05:01:41.725563 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725583 | orchestrator |
2026-02-05 05:01:41.725598 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 05:01:41.725612 | orchestrator | Thursday 05 February 2026 05:01:14 +0000 (0:00:00.762) 0:20:59.553 *****
2026-02-05 05:01:41.725625 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725638 | orchestrator |
2026-02-05 05:01:41.725652 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 05:01:41.725665 | orchestrator | Thursday 05 February 2026 05:01:15 +0000 (0:00:00.781) 0:21:00.335 *****
2026-02-05 05:01:41.725678 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725691 | orchestrator |
2026-02-05 05:01:41.725705 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 05:01:41.725718 | orchestrator | Thursday 05 February 2026 05:01:16 +0000 (0:00:00.753) 0:21:01.089 *****
2026-02-05 05:01:41.725732 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725744 | orchestrator |
2026-02-05 05:01:41.725757 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 05:01:41.725770 | orchestrator | Thursday 05 February 2026 05:01:17 +0000 (0:00:00.767) 0:21:01.856 *****
2026-02-05 05:01:41.725783 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725795 | orchestrator |
2026-02-05 05:01:41.725809 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 05:01:41.725826 | orchestrator | Thursday 05 February 2026 05:01:17 +0000 (0:00:00.751) 0:21:02.608 *****
2026-02-05 05:01:41.725844 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725864 | orchestrator |
2026-02-05 05:01:41.725884 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 05:01:41.725905 | orchestrator | Thursday 05 February 2026 05:01:18 +0000 (0:00:00.786) 0:21:03.394 *****
2026-02-05 05:01:41.725924 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.725943 | orchestrator |
2026-02-05 05:01:41.725962 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 05:01:41.725982 | orchestrator | Thursday 05 February 2026 05:01:19 +0000 (0:00:00.771) 0:21:04.166 *****
2026-02-05 05:01:41.726118 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.726135 | orchestrator |
2026-02-05 05:01:41.726146 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 05:01:41.726159 | orchestrator | Thursday 05 February 2026 05:01:20 +0000 (0:00:00.772) 0:21:04.938 *****
2026-02-05 05:01:41.726178 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.726199 | orchestrator |
2026-02-05 05:01:41.726219 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 05:01:41.726231 | orchestrator | Thursday 05 February 2026 05:01:20 +0000 (0:00:00.761) 0:21:05.700 *****
2026-02-05 05:01:41.726241 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.726252 | orchestrator |
2026-02-05 05:01:41.726263 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 05:01:41.726274 | orchestrator | Thursday 05 February 2026 05:01:21 +0000 (0:00:00.769) 0:21:06.469 *****
2026-02-05 05:01:41.726285 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.726295 | orchestrator |
2026-02-05 05:01:41.726306 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-05 05:01:41.726318 | orchestrator | Thursday 05 February 2026 05:01:22 +0000 (0:00:00.795) 0:21:07.265 *****
2026-02-05 05:01:41.726329 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:01:41.726339 | orchestrator |
2026-02-05 05:01:41.726350 |
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:01:41.726361 | orchestrator | Thursday 05 February 2026 05:01:23 +0000 (0:00:00.762) 0:21:08.028 ***** 2026-02-05 05:01:41.726372 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726382 | orchestrator | 2026-02-05 05:01:41.726393 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:01:41.726404 | orchestrator | Thursday 05 February 2026 05:01:23 +0000 (0:00:00.786) 0:21:08.815 ***** 2026-02-05 05:01:41.726415 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726425 | orchestrator | 2026-02-05 05:01:41.726436 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:01:41.726447 | orchestrator | Thursday 05 February 2026 05:01:24 +0000 (0:00:00.757) 0:21:09.573 ***** 2026-02-05 05:01:41.726458 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726469 | orchestrator | 2026-02-05 05:01:41.726480 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-05 05:01:41.726490 | orchestrator | Thursday 05 February 2026 05:01:25 +0000 (0:00:00.752) 0:21:10.325 ***** 2026-02-05 05:01:41.726517 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726528 | orchestrator | 2026-02-05 05:01:41.726561 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:01:41.726573 | orchestrator | Thursday 05 February 2026 05:01:26 +0000 (0:00:00.742) 0:21:11.067 ***** 2026-02-05 05:01:41.726584 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726595 | orchestrator | 2026-02-05 05:01:41.726605 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:01:41.726616 | orchestrator | Thursday 05 February 2026 05:01:27 +0000 (0:00:00.769) 0:21:11.837 ***** 
2026-02-05 05:01:41.726627 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726637 | orchestrator | 2026-02-05 05:01:41.726648 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:01:41.726659 | orchestrator | Thursday 05 February 2026 05:01:27 +0000 (0:00:00.762) 0:21:12.600 ***** 2026-02-05 05:01:41.726670 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726680 | orchestrator | 2026-02-05 05:01:41.726691 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:01:41.726702 | orchestrator | Thursday 05 February 2026 05:01:28 +0000 (0:00:00.786) 0:21:13.386 ***** 2026-02-05 05:01:41.726713 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726724 | orchestrator | 2026-02-05 05:01:41.726734 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:01:41.726745 | orchestrator | Thursday 05 February 2026 05:01:29 +0000 (0:00:00.763) 0:21:14.150 ***** 2026-02-05 05:01:41.726782 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726804 | orchestrator | 2026-02-05 05:01:41.726816 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:01:41.726826 | orchestrator | Thursday 05 February 2026 05:01:30 +0000 (0:00:00.770) 0:21:14.920 ***** 2026-02-05 05:01:41.726837 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726848 | orchestrator | 2026-02-05 05:01:41.726859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:01:41.726870 | orchestrator | Thursday 05 February 2026 05:01:30 +0000 (0:00:00.775) 0:21:15.696 ***** 2026-02-05 05:01:41.726880 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726891 | orchestrator | 2026-02-05 05:01:41.726902 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-05 05:01:41.726913 | orchestrator | Thursday 05 February 2026 05:01:31 +0000 (0:00:00.794) 0:21:16.491 ***** 2026-02-05 05:01:41.726924 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.726941 | orchestrator | 2026-02-05 05:01:41.726960 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:01:41.726980 | orchestrator | Thursday 05 February 2026 05:01:32 +0000 (0:00:00.827) 0:21:17.318 ***** 2026-02-05 05:01:41.727000 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727019 | orchestrator | 2026-02-05 05:01:41.727035 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:01:41.727046 | orchestrator | Thursday 05 February 2026 05:01:33 +0000 (0:00:00.768) 0:21:18.087 ***** 2026-02-05 05:01:41.727085 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727096 | orchestrator | 2026-02-05 05:01:41.727108 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:01:41.727118 | orchestrator | Thursday 05 February 2026 05:01:34 +0000 (0:00:00.786) 0:21:18.874 ***** 2026-02-05 05:01:41.727129 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727140 | orchestrator | 2026-02-05 05:01:41.727151 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:01:41.727162 | orchestrator | Thursday 05 February 2026 05:01:34 +0000 (0:00:00.760) 0:21:19.634 ***** 2026-02-05 05:01:41.727173 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727184 | orchestrator | 2026-02-05 05:01:41.727195 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:01:41.727205 | orchestrator | Thursday 05 February 2026 05:01:35 +0000 (0:00:00.758) 0:21:20.392 ***** 2026-02-05 05:01:41.727216 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 05:01:41.727227 | orchestrator | 2026-02-05 05:01:41.727238 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 05:01:41.727249 | orchestrator | Thursday 05 February 2026 05:01:36 +0000 (0:00:00.748) 0:21:21.141 ***** 2026-02-05 05:01:41.727259 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727270 | orchestrator | 2026-02-05 05:01:41.727281 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:01:41.727292 | orchestrator | Thursday 05 February 2026 05:01:37 +0000 (0:00:00.764) 0:21:21.906 ***** 2026-02-05 05:01:41.727303 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727313 | orchestrator | 2026-02-05 05:01:41.727324 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:01:41.727336 | orchestrator | Thursday 05 February 2026 05:01:37 +0000 (0:00:00.761) 0:21:22.667 ***** 2026-02-05 05:01:41.727347 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727358 | orchestrator | 2026-02-05 05:01:41.727369 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:01:41.727388 | orchestrator | Thursday 05 February 2026 05:01:38 +0000 (0:00:00.768) 0:21:23.436 ***** 2026-02-05 05:01:41.727407 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727427 | orchestrator | 2026-02-05 05:01:41.727446 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 05:01:41.727478 | orchestrator | Thursday 05 February 2026 05:01:39 +0000 (0:00:00.789) 0:21:24.226 ***** 2026-02-05 05:01:41.727491 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727502 | orchestrator | 2026-02-05 05:01:41.727513 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:01:41.727524 | orchestrator | Thursday 05 February 2026 05:01:40 +0000 (0:00:00.769) 0:21:24.995 ***** 2026-02-05 05:01:41.727535 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727546 | orchestrator | 2026-02-05 05:01:41.727557 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:01:41.727568 | orchestrator | Thursday 05 February 2026 05:01:40 +0000 (0:00:00.773) 0:21:25.768 ***** 2026-02-05 05:01:41.727586 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:01:41.727598 | orchestrator | 2026-02-05 05:01:41.727617 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:02:33.689598 | orchestrator | Thursday 05 February 2026 05:01:41 +0000 (0:00:00.761) 0:21:26.530 ***** 2026-02-05 05:02:33.689710 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689728 | orchestrator | 2026-02-05 05:02:33.689738 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:02:33.689747 | orchestrator | Thursday 05 February 2026 05:01:42 +0000 (0:00:00.759) 0:21:27.289 ***** 2026-02-05 05:02:33.689756 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689765 | orchestrator | 2026-02-05 05:02:33.689774 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:02:33.689783 | orchestrator | Thursday 05 February 2026 05:01:43 +0000 (0:00:00.873) 0:21:28.163 ***** 2026-02-05 05:02:33.689789 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689794 | orchestrator | 2026-02-05 05:02:33.689799 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:02:33.689806 | orchestrator | Thursday 05 February 2026 05:01:44 +0000 (0:00:00.763) 0:21:28.926 ***** 2026-02-05 
05:02:33.689814 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689823 | orchestrator | 2026-02-05 05:02:33.689831 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:02:33.689840 | orchestrator | Thursday 05 February 2026 05:01:44 +0000 (0:00:00.869) 0:21:29.796 ***** 2026-02-05 05:02:33.689849 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689858 | orchestrator | 2026-02-05 05:02:33.689865 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:02:33.689874 | orchestrator | Thursday 05 February 2026 05:01:45 +0000 (0:00:00.821) 0:21:30.617 ***** 2026-02-05 05:02:33.689882 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689891 | orchestrator | 2026-02-05 05:02:33.689901 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:02:33.689911 | orchestrator | Thursday 05 February 2026 05:01:46 +0000 (0:00:00.762) 0:21:31.380 ***** 2026-02-05 05:02:33.689919 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689928 | orchestrator | 2026-02-05 05:02:33.689937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:02:33.689945 | orchestrator | Thursday 05 February 2026 05:01:47 +0000 (0:00:00.802) 0:21:32.183 ***** 2026-02-05 05:02:33.689954 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689964 | orchestrator | 2026-02-05 05:02:33.689973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:02:33.689982 | orchestrator | Thursday 05 February 2026 05:01:48 +0000 (0:00:00.773) 0:21:32.957 ***** 2026-02-05 05:02:33.689990 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.689998 | orchestrator | 2026-02-05 05:02:33.690007 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:02:33.690094 | orchestrator | Thursday 05 February 2026 05:01:48 +0000 (0:00:00.772) 0:21:33.729 ***** 2026-02-05 05:02:33.690103 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690111 | orchestrator | 2026-02-05 05:02:33.690141 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:02:33.690151 | orchestrator | Thursday 05 February 2026 05:01:49 +0000 (0:00:00.752) 0:21:34.482 ***** 2026-02-05 05:02:33.690160 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-05 05:02:33.690170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-05 05:02:33.690178 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-05 05:02:33.690187 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690196 | orchestrator | 2026-02-05 05:02:33.690205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:02:33.690215 | orchestrator | Thursday 05 February 2026 05:01:50 +0000 (0:00:01.030) 0:21:35.513 ***** 2026-02-05 05:02:33.690224 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-05 05:02:33.690233 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-05 05:02:33.690242 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-05 05:02:33.690250 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690259 | orchestrator | 2026-02-05 05:02:33.690267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:02:33.690276 | orchestrator | Thursday 05 February 2026 05:01:52 +0000 (0:00:01.346) 0:21:36.860 ***** 2026-02-05 05:02:33.690285 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-05 05:02:33.690293 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-02-05 05:02:33.690301 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-05 05:02:33.690309 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690318 | orchestrator | 2026-02-05 05:02:33.690327 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:02:33.690336 | orchestrator | Thursday 05 February 2026 05:01:53 +0000 (0:00:01.335) 0:21:38.196 ***** 2026-02-05 05:02:33.690345 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690354 | orchestrator | 2026-02-05 05:02:33.690362 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:02:33.690371 | orchestrator | Thursday 05 February 2026 05:01:54 +0000 (0:00:00.761) 0:21:38.957 ***** 2026-02-05 05:02:33.690380 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-05 05:02:33.690389 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690399 | orchestrator | 2026-02-05 05:02:33.690408 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:02:33.690417 | orchestrator | Thursday 05 February 2026 05:01:55 +0000 (0:00:00.878) 0:21:39.836 ***** 2026-02-05 05:02:33.690425 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690433 | orchestrator | 2026-02-05 05:02:33.690442 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-05 05:02:33.690464 | orchestrator | Thursday 05 February 2026 05:01:55 +0000 (0:00:00.789) 0:21:40.625 ***** 2026-02-05 05:02:33.690473 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 05:02:33.690501 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 05:02:33.690511 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 05:02:33.690520 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 05:02:33.690529 | orchestrator | 2026-02-05 05:02:33.690538 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-05 05:02:33.690547 | orchestrator | Thursday 05 February 2026 05:01:56 +0000 (0:00:01.033) 0:21:41.659 ***** 2026-02-05 05:02:33.690555 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690564 | orchestrator | 2026-02-05 05:02:33.690573 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-05 05:02:33.690580 | orchestrator | Thursday 05 February 2026 05:01:57 +0000 (0:00:00.766) 0:21:42.425 ***** 2026-02-05 05:02:33.690588 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690597 | orchestrator | 2026-02-05 05:02:33.690606 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-05 05:02:33.690624 | orchestrator | Thursday 05 February 2026 05:01:58 +0000 (0:00:00.792) 0:21:43.218 ***** 2026-02-05 05:02:33.690633 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690641 | orchestrator | 2026-02-05 05:02:33.690650 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-05 05:02:33.690659 | orchestrator | Thursday 05 February 2026 05:01:59 +0000 (0:00:00.773) 0:21:43.991 ***** 2026-02-05 05:02:33.690667 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:02:33.690676 | orchestrator | 2026-02-05 05:02:33.690685 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-05 05:02:33.690693 | orchestrator | 2026-02-05 05:02:33.690701 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-05 05:02:33.690709 | orchestrator | Thursday 05 February 2026 05:02:00 +0000 (0:00:01.317) 0:21:45.308 ***** 2026-02-05 05:02:33.690717 | orchestrator | changed: [testbed-node-0] 2026-02-05 05:02:33.690726 | 
orchestrator | 2026-02-05 05:02:33.690734 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-05 05:02:33.690742 | orchestrator | Thursday 05 February 2026 05:02:13 +0000 (0:00:13.079) 0:21:58.388 ***** 2026-02-05 05:02:33.690751 | orchestrator | changed: [testbed-node-0] 2026-02-05 05:02:33.690759 | orchestrator | 2026-02-05 05:02:33.690766 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 05:02:33.690773 | orchestrator | Thursday 05 February 2026 05:02:16 +0000 (0:00:02.788) 0:22:01.177 ***** 2026-02-05 05:02:33.690781 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-05 05:02:33.690789 | orchestrator | 2026-02-05 05:02:33.690797 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 05:02:33.690806 | orchestrator | Thursday 05 February 2026 05:02:17 +0000 (0:00:01.090) 0:22:02.267 ***** 2026-02-05 05:02:33.690815 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.690849 | orchestrator | 2026-02-05 05:02:33.690856 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:02:33.690864 | orchestrator | Thursday 05 February 2026 05:02:18 +0000 (0:00:01.485) 0:22:03.753 ***** 2026-02-05 05:02:33.690871 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.690878 | orchestrator | 2026-02-05 05:02:33.690885 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:02:33.690893 | orchestrator | Thursday 05 February 2026 05:02:20 +0000 (0:00:01.129) 0:22:04.882 ***** 2026-02-05 05:02:33.690900 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.690908 | orchestrator | 2026-02-05 05:02:33.690917 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:02:33.690924 | orchestrator | 
Thursday 05 February 2026 05:02:21 +0000 (0:00:01.576) 0:22:06.459 ***** 2026-02-05 05:02:33.690932 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.690941 | orchestrator | 2026-02-05 05:02:33.690950 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:02:33.690958 | orchestrator | Thursday 05 February 2026 05:02:22 +0000 (0:00:01.127) 0:22:07.586 ***** 2026-02-05 05:02:33.690966 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.690975 | orchestrator | 2026-02-05 05:02:33.690984 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:02:33.690994 | orchestrator | Thursday 05 February 2026 05:02:23 +0000 (0:00:01.137) 0:22:08.724 ***** 2026-02-05 05:02:33.691002 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.691010 | orchestrator | 2026-02-05 05:02:33.691019 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:02:33.691028 | orchestrator | Thursday 05 February 2026 05:02:25 +0000 (0:00:01.163) 0:22:09.888 ***** 2026-02-05 05:02:33.691036 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:33.691046 | orchestrator | 2026-02-05 05:02:33.691108 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:02:33.691117 | orchestrator | Thursday 05 February 2026 05:02:26 +0000 (0:00:01.137) 0:22:11.026 ***** 2026-02-05 05:02:33.691135 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.691143 | orchestrator | 2026-02-05 05:02:33.691152 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:02:33.691162 | orchestrator | Thursday 05 February 2026 05:02:27 +0000 (0:00:01.104) 0:22:12.130 ***** 2026-02-05 05:02:33.691175 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 05:02:33.691185 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:02:33.691194 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:02:33.691203 | orchestrator | 2026-02-05 05:02:33.691212 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:02:33.691220 | orchestrator | Thursday 05 February 2026 05:02:29 +0000 (0:00:01.908) 0:22:14.039 ***** 2026-02-05 05:02:33.691229 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:33.691238 | orchestrator | 2026-02-05 05:02:33.691247 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:02:33.691255 | orchestrator | Thursday 05 February 2026 05:02:30 +0000 (0:00:01.247) 0:22:15.286 ***** 2026-02-05 05:02:33.691271 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 05:02:33.691292 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:02:55.958931 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:02:55.959015 | orchestrator | 2026-02-05 05:02:55.959027 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:02:55.959036 | orchestrator | Thursday 05 February 2026 05:02:33 +0000 (0:00:03.210) 0:22:18.497 ***** 2026-02-05 05:02:55.959044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 05:02:55.959082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 05:02:55.959090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 05:02:55.959098 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959105 | orchestrator | 2026-02-05 05:02:55.959113 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:02:55.959121 | 
orchestrator | Thursday 05 February 2026 05:02:35 +0000 (0:00:01.395) 0:22:19.893 ***** 2026-02-05 05:02:55.959129 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:02:55.959140 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:02:55.959147 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:02:55.959155 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959162 | orchestrator | 2026-02-05 05:02:55.959169 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:02:55.959177 | orchestrator | Thursday 05 February 2026 05:02:36 +0000 (0:00:01.593) 0:22:21.486 ***** 2026-02-05 05:02:55.959186 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:02:55.959197 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:02:55.959225 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:02:55.959233 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959240 | orchestrator | 2026-02-05 05:02:55.959248 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:02:55.959255 | orchestrator | Thursday 05 February 2026 05:02:37 +0000 (0:00:01.144) 0:22:22.630 ***** 2026-02-05 05:02:55.959264 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:02:31.353268', 'end': '2026-02-05 05:02:31.402137', 'delta': '0:00:00.048869', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:02:55.959300 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:02:31.933474', 'end': '2026-02-05 05:02:31.984757', 'delta': '0:00:00.051283', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:02:55.959309 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:02:32.515237', 'end': '2026-02-05 05:02:32.549952', 'delta': '0:00:00.034715', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:02:55.959317 | orchestrator | 2026-02-05 05:02:55.959324 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:02:55.959332 | orchestrator | Thursday 05 February 2026 05:02:39 +0000 (0:00:01.207) 0:22:23.838 ***** 2026-02-05 05:02:55.959339 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:55.959347 | orchestrator | 2026-02-05 05:02:55.959354 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:02:55.959362 | orchestrator | Thursday 05 February 2026 05:02:40 
+0000 (0:00:01.245) 0:22:25.084 ***** 2026-02-05 05:02:55.959369 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959382 | orchestrator | 2026-02-05 05:02:55.959390 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:02:55.959397 | orchestrator | Thursday 05 February 2026 05:02:41 +0000 (0:00:01.235) 0:22:26.320 ***** 2026-02-05 05:02:55.959404 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:55.959411 | orchestrator | 2026-02-05 05:02:55.959419 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:02:55.959426 | orchestrator | Thursday 05 February 2026 05:02:42 +0000 (0:00:01.116) 0:22:27.436 ***** 2026-02-05 05:02:55.959433 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:55.959440 | orchestrator | 2026-02-05 05:02:55.959448 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:02:55.959455 | orchestrator | Thursday 05 February 2026 05:02:44 +0000 (0:00:01.988) 0:22:29.425 ***** 2026-02-05 05:02:55.959462 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:02:55.959469 | orchestrator | 2026-02-05 05:02:55.959477 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:02:55.959486 | orchestrator | Thursday 05 February 2026 05:02:45 +0000 (0:00:01.111) 0:22:30.536 ***** 2026-02-05 05:02:55.959495 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959503 | orchestrator | 2026-02-05 05:02:55.959512 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:02:55.959520 | orchestrator | Thursday 05 February 2026 05:02:46 +0000 (0:00:01.125) 0:22:31.662 ***** 2026-02-05 05:02:55.959529 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959537 | orchestrator | 2026-02-05 05:02:55.959546 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-05 05:02:55.959554 | orchestrator | Thursday 05 February 2026 05:02:48 +0000 (0:00:01.200) 0:22:32.862 ***** 2026-02-05 05:02:55.959563 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959571 | orchestrator | 2026-02-05 05:02:55.959580 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:02:55.959588 | orchestrator | Thursday 05 February 2026 05:02:49 +0000 (0:00:01.138) 0:22:34.001 ***** 2026-02-05 05:02:55.959596 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959605 | orchestrator | 2026-02-05 05:02:55.959613 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:02:55.959622 | orchestrator | Thursday 05 February 2026 05:02:50 +0000 (0:00:01.157) 0:22:35.158 ***** 2026-02-05 05:02:55.959630 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959639 | orchestrator | 2026-02-05 05:02:55.959648 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:02:55.959656 | orchestrator | Thursday 05 February 2026 05:02:51 +0000 (0:00:01.123) 0:22:36.282 ***** 2026-02-05 05:02:55.959664 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959673 | orchestrator | 2026-02-05 05:02:55.959681 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:02:55.959689 | orchestrator | Thursday 05 February 2026 05:02:52 +0000 (0:00:01.116) 0:22:37.399 ***** 2026-02-05 05:02:55.959698 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959706 | orchestrator | 2026-02-05 05:02:55.959715 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:02:55.959723 | orchestrator | Thursday 05 February 2026 05:02:53 +0000 (0:00:01.120) 0:22:38.519 ***** 2026-02-05 05:02:55.959731 | orchestrator | 
skipping: [testbed-node-0] 2026-02-05 05:02:55.959740 | orchestrator | 2026-02-05 05:02:55.959748 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:02:55.959760 | orchestrator | Thursday 05 February 2026 05:02:54 +0000 (0:00:01.125) 0:22:39.644 ***** 2026-02-05 05:02:55.959768 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:55.959777 | orchestrator | 2026-02-05 05:02:55.959791 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:02:58.414912 | orchestrator | Thursday 05 February 2026 05:02:55 +0000 (0:00:01.121) 0:22:40.766 ***** 2026-02-05 05:02:58.415000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:02:58.415115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:02:58.415194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:02:58.415208 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:02:58.415215 | orchestrator | 2026-02-05 05:02:58.415222 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:02:58.415230 | orchestrator | Thursday 05 February 2026 05:02:57 +0000 (0:00:01.248) 0:22:42.014 ***** 2026-02-05 05:02:58.415237 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:02:58.415245 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:02:58.415261 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094464 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094548 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094558 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094565 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094598 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094625 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094632 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:03:09.094639 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
05:03:09.094646 | orchestrator | 2026-02-05 05:03:09.094653 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:03:09.094661 | orchestrator | Thursday 05 February 2026 05:02:58 +0000 (0:00:01.213) 0:22:43.228 ***** 2026-02-05 05:03:09.094666 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:03:09.094673 | orchestrator | 2026-02-05 05:03:09.094679 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:03:09.094685 | orchestrator | Thursday 05 February 2026 05:03:00 +0000 (0:00:01.606) 0:22:44.834 ***** 2026-02-05 05:03:09.094691 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:03:09.094697 | orchestrator | 2026-02-05 05:03:09.094702 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:03:09.094708 | orchestrator | Thursday 05 February 2026 05:03:01 +0000 (0:00:01.165) 0:22:45.999 ***** 2026-02-05 05:03:09.094714 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:03:09.094720 | orchestrator | 2026-02-05 05:03:09.094726 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:03:09.094731 | orchestrator | Thursday 05 February 2026 05:03:02 +0000 (0:00:01.578) 0:22:47.578 ***** 2026-02-05 05:03:09.094737 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:09.094743 | orchestrator | 2026-02-05 05:03:09.094749 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:03:09.094759 | orchestrator | Thursday 05 February 2026 05:03:03 +0000 (0:00:01.120) 0:22:48.698 ***** 2026-02-05 05:03:09.094765 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:09.094771 | orchestrator | 2026-02-05 05:03:09.094777 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:03:09.094783 | orchestrator | Thursday 05 February 2026 
05:03:05 +0000 (0:00:01.225) 0:22:49.924 ***** 2026-02-05 05:03:09.094789 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:09.094794 | orchestrator | 2026-02-05 05:03:09.094800 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:03:09.094806 | orchestrator | Thursday 05 February 2026 05:03:06 +0000 (0:00:01.187) 0:22:51.111 ***** 2026-02-05 05:03:09.094812 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 05:03:09.094818 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 05:03:09.094824 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 05:03:09.094830 | orchestrator | 2026-02-05 05:03:09.094836 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:03:09.094842 | orchestrator | Thursday 05 February 2026 05:03:07 +0000 (0:00:01.644) 0:22:52.756 ***** 2026-02-05 05:03:09.094848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 05:03:09.094858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 05:03:09.094864 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 05:03:09.094870 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:09.094876 | orchestrator | 2026-02-05 05:03:09.094885 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:03:51.859705 | orchestrator | Thursday 05 February 2026 05:03:09 +0000 (0:00:01.144) 0:22:53.901 ***** 2026-02-05 05:03:51.859790 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:51.859799 | orchestrator | 2026-02-05 05:03:51.859805 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:03:51.859811 | orchestrator | Thursday 05 February 2026 05:03:10 +0000 (0:00:01.134) 0:22:55.036 ***** 2026-02-05 05:03:51.859817 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 05:03:51.859823 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:03:51.859829 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:03:51.859834 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:03:51.859840 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:03:51.859845 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:03:51.859851 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:03:51.859856 | orchestrator | 2026-02-05 05:03:51.859862 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:03:51.859867 | orchestrator | Thursday 05 February 2026 05:03:11 +0000 (0:00:01.779) 0:22:56.815 ***** 2026-02-05 05:03:51.859872 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 05:03:51.859878 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:03:51.859883 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:03:51.859888 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:03:51.859893 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:03:51.859898 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:03:51.859903 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:03:51.859925 | orchestrator | 2026-02-05 05:03:51.859930 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 05:03:51.859936 | orchestrator | Thursday 05 February 2026 05:03:14 +0000 (0:00:02.538) 0:22:59.354 ***** 2026-02-05 05:03:51.859941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-05 05:03:51.859946 | orchestrator | 2026-02-05 05:03:51.859952 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:03:51.859957 | orchestrator | Thursday 05 February 2026 05:03:15 +0000 (0:00:01.096) 0:23:00.451 ***** 2026-02-05 05:03:51.859962 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-05 05:03:51.859967 | orchestrator | 2026-02-05 05:03:51.859972 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:03:51.859977 | orchestrator | Thursday 05 February 2026 05:03:16 +0000 (0:00:01.128) 0:23:01.580 ***** 2026-02-05 05:03:51.859982 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:03:51.859987 | orchestrator | 2026-02-05 05:03:51.859992 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:03:51.859998 | orchestrator | Thursday 05 February 2026 05:03:18 +0000 (0:00:01.533) 0:23:03.113 ***** 2026-02-05 05:03:51.860003 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:51.860008 | orchestrator | 2026-02-05 05:03:51.860013 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:03:51.860018 | orchestrator | Thursday 05 February 2026 05:03:19 +0000 (0:00:01.109) 0:23:04.223 ***** 2026-02-05 05:03:51.860023 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:51.860028 | orchestrator | 2026-02-05 05:03:51.860034 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-02-05 05:03:51.860039 | orchestrator | Thursday 05 February 2026 05:03:20 +0000 (0:00:01.127) 0:23:05.351 ***** 2026-02-05 05:03:51.860044 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:51.860050 | orchestrator | 2026-02-05 05:03:51.860096 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:03:51.860102 | orchestrator | Thursday 05 February 2026 05:03:21 +0000 (0:00:01.111) 0:23:06.462 ***** 2026-02-05 05:03:51.860108 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:03:51.860113 | orchestrator | 2026-02-05 05:03:51.860118 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:03:51.860123 | orchestrator | Thursday 05 February 2026 05:03:23 +0000 (0:00:01.550) 0:23:08.013 ***** 2026-02-05 05:03:51.860129 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:51.860134 | orchestrator | 2026-02-05 05:03:51.860139 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:03:51.860144 | orchestrator | Thursday 05 February 2026 05:03:24 +0000 (0:00:01.105) 0:23:09.119 ***** 2026-02-05 05:03:51.860152 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:03:51.860160 | orchestrator | 2026-02-05 05:03:51.860169 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:03:51.860175 | orchestrator | Thursday 05 February 2026 05:03:25 +0000 (0:00:01.102) 0:23:10.221 ***** 2026-02-05 05:03:51.860180 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:03:51.860193 | orchestrator | 2026-02-05 05:03:51.860198 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:03:51.860220 | orchestrator | Thursday 05 February 2026 05:03:26 +0000 (0:00:01.574) 0:23:11.796 ***** 2026-02-05 05:03:51.860225 | orchestrator | ok: [testbed-node-0] 2026-02-05 
TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 05 February 2026 05:03:28 +0000 (0:00:01.566) 0:23:13.362 *****
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 05 February 2026 05:03:29 +0000 (0:00:01.088) 0:23:14.450 *****
ok: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 05 February 2026 05:03:30 +0000 (0:00:01.137) 0:23:15.588 *****
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 05 February 2026 05:03:31 +0000 (0:00:01.150) 0:23:16.739 *****
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 05 February 2026 05:03:33 +0000 (0:00:01.133) 0:23:17.873 *****
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 05 February 2026 05:03:34 +0000 (0:00:01.105) 0:23:18.978 *****
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 05 February 2026 05:03:35 +0000 (0:00:01.127) 0:23:20.106 *****
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 05 February 2026 05:03:36 +0000 (0:00:01.119) 0:23:21.226 *****
ok: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 05 February 2026 05:03:37 +0000 (0:00:01.149) 0:23:22.375 *****
ok: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 05 February 2026 05:03:38 +0000 (0:00:01.139) 0:23:23.515 *****
ok: [testbed-node-0]

TASK [ceph-common : Include configure_repository.yml] **************************
Thursday 05 February 2026 05:03:39 +0000 (0:00:01.136) 0:23:24.652 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
Thursday 05 February 2026 05:03:40 +0000 (0:00:01.096) 0:23:25.749 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
Thursday 05 February 2026 05:03:42 +0000 (0:00:01.103) 0:23:26.852 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include installs/install_on_debian.yml] ********************
Thursday 05 February 2026 05:03:43 +0000 (0:00:01.104) 0:23:27.956 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
Thursday 05 February 2026 05:03:44 +0000 (0:00:01.032) 0:23:28.989 *****
skipping: [testbed-node-0]

TASK [ceph-common : Get ceph version] ******************************************
Thursday 05 February 2026 05:03:45 +0000 (0:00:01.118) 0:23:30.108 *****
skipping: [testbed-node-0]

TASK [ceph-common : Set_fact ceph_version] *************************************
Thursday 05 February 2026 05:03:46 +0000 (0:00:01.080) 0:23:31.188 *****
skipping: [testbed-node-0]

TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
Thursday 05 February 2026 05:03:47 +0000 (0:00:01.069) 0:23:32.257 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
Thursday 05 February 2026 05:03:48 +0000 (0:00:01.078) 0:23:33.336 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include configure_cluster_name.yml] ************************
Thursday 05 February 2026 05:03:49 +0000 (0:00:01.120) 0:23:34.456 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include configure_memory_allocator.yml] ********************
Thursday 05 February 2026 05:03:50 +0000 (0:00:01.100) 0:23:35.557 *****
skipping: [testbed-node-0]

TASK [ceph-common : Include selinux.yml] ***************************************
Thursday 05 February 2026 05:03:51 +0000 (0:00:01.107) 0:23:36.665 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Thursday 05 February 2026 05:03:52 +0000 (0:00:01.131) 0:23:37.797 *****
ok: [testbed-node-0]

TASK [ceph-container-common : Enable ceph.target] ******************************
Thursday 05 February 2026 05:03:55 +0000 (0:00:02.066) 0:23:39.864 *****
ok: [testbed-node-0]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Thursday 05 February 2026 05:03:57 +0000 (0:00:02.563) 0:23:42.427 *****
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0

TASK [ceph-container-common : Stop lvmetad] ************************************
Thursday 05 February 2026 05:03:58 +0000 (0:00:01.112) 0:23:43.540 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Thursday 05 February 2026 05:03:59 +0000 (0:00:01.102) 0:23:44.642 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Thursday 05 February 2026 05:04:00 +0000 (0:00:01.123) 0:23:45.766 *****
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Thursday 05 February 2026 05:04:02 +0000 (0:00:01.962) 0:23:47.729 *****
ok: [testbed-node-0]

TASK [ceph-container-common : Restore certificates selinux context] ************
Thursday 05 February 2026 05:04:04 +0000 (0:00:01.465) 0:23:49.195 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Thursday 05 February 2026 05:04:05 +0000 (0:00:01.131) 0:23:50.327 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Include registry.yml] ****************************
Thursday 05 February 2026 05:04:06 +0000 (0:00:01.120) 0:23:51.447 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Thursday 05 February 2026 05:04:07 +0000 (0:00:01.124) 0:23:52.572 *****
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0

TASK [ceph-container-common : Pulling Ceph container image] ********************
Thursday 05 February 2026 05:04:08 +0000 (0:00:01.098) 0:23:53.671 *****
ok: [testbed-node-0]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Thursday 05 February 2026 05:04:10 +0000 (0:00:01.817) 0:23:55.489 *****
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Thursday 05 February 2026 05:04:11 +0000 (0:00:01.142) 0:23:56.631 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Export local ceph dev image] *********************
Thursday 05 February 2026 05:04:12 +0000 (0:00:01.110) 0:23:57.741 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Thursday 05 February 2026 05:04:14 +0000 (0:00:01.158) 0:23:58.900 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Load ceph dev image] *****************************
Thursday 05 February 2026 05:04:15 +0000 (0:00:01.125) 0:24:00.026 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Thursday 05 February 2026 05:04:16 +0000 (0:00:01.127) 0:24:01.154 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Get ceph version] ********************************
Thursday 05 February 2026 05:04:17 +0000 (0:00:01.132) 0:24:02.286 *****
ok: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Thursday 05 February 2026 05:04:20 +0000 (0:00:02.785) 0:24:05.072 *****
ok: [testbed-node-0]

TASK [ceph-container-common : Include release.yml] *****************************
Thursday 05 February 2026 05:04:21 +0000 (0:00:01.096) 0:24:06.169 *****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Thursday 05 February 2026 05:04:22 +0000 (0:00:01.164) 0:24:07.334 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Thursday 05 February 2026 05:04:23 +0000 (0:00:01.146) 0:24:08.481 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Thursday 05 February 2026 05:04:24 +0000 (0:00:01.164) 0:24:09.645 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Thursday 05 February 2026 05:04:26 +0000 (0:00:01.179) 0:24:10.825 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Thursday 05 February 2026 05:04:27 +0000 (0:00:01.175) 0:24:12.000 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Thursday 05 February 2026 05:04:28 +0000 (0:00:01.148) 0:24:13.149 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Thursday 05 February 2026 05:04:29 +0000 (0:00:01.147) 0:24:14.297 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Thursday 05 February 2026 05:04:30 +0000 (0:00:01.117) 0:24:15.414 *****
skipping: [testbed-node-0]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Thursday 05 February 2026 05:04:31 +0000 (0:00:01.138) 0:24:16.553 *****
ok: [testbed-node-0]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Thursday 05 February 2026 05:04:32 +0000 (0:00:01.184) 0:24:17.738 *****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0

TASK [ceph-config : Create ceph initial directories] ***************************
Thursday 05 February 2026 05:04:34 +0000 (0:00:01.150) 0:24:18.888 *****
ok: [testbed-node-0] => (item=/etc/ceph)
ok: [testbed-node-0] => (item=/var/lib/ceph/)
ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
ok: [testbed-node-0] => (item=/var/run/ceph)
ok: [testbed-node-0] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Thursday 05 February 2026 05:04:41 +0000 (0:00:07.345) 0:24:26.234 *****
skipping: [testbed-node-0]

TASK [ceph-config : Reset num_osds] ********************************************
Thursday 05 February 2026 05:04:42 +0000 (0:00:01.104) 0:24:27.339 *****
skipping: [testbed-node-0]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Thursday 05 February 2026 05:04:43 +0000 (0:00:01.104) 0:24:28.443 *****
skipping: [testbed-node-0]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Thursday 05 February 2026 05:04:44 +0000 (0:00:01.113) 0:24:29.556 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Thursday 05 February 2026 05:04:45 +0000 (0:00:01.132) 0:24:30.689 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact _devices] *****************************************
Thursday 05 February 2026 05:04:46 +0000 (0:00:01.108) 0:24:31.798 *****
skipping: [testbed-node-0]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 05 February 2026 05:04:48 +0000 (0:00:01.111) 0:24:32.909 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Thursday 05 February 2026 05:04:49 +0000 (0:00:01.121) 0:24:34.030 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Thursday 05 February 2026 05:04:50 +0000 (0:00:01.130) 0:24:35.161 *****
skipping: [testbed-node-0]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Thursday 05 February 2026 05:04:51 +0000 (0:00:01.135) 0:24:36.297 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Thursday 05 February 2026 05:04:52 +0000 (0:00:01.101) 0:24:37.399 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Thursday 05 February 2026 05:04:53 +0000 (0:00:01.114) 0:24:38.514 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Thursday 05 February 2026 05:04:54 +0000 (0:00:01.118) 0:24:39.632 *****
skipping: [testbed-node-0]

TASK [ceph-config : Render rgw configs] ****************************************
Thursday 05 February 2026 05:04:56 +0000 (0:00:01.630) 0:24:41.263 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set config to cluster] *************************************
Thursday 05 February 2026 05:04:57 +0000 (0:00:01.113) 0:24:42.376 *****
skipping: [testbed-node-0]

TASK [ceph-config : Set rgw configs to file] ***********************************
Thursday 05 February 2026 05:04:58 +0000 (0:00:01.206) 0:24:43.582 *****
skipping: [testbed-node-0]

TASK [ceph-config : Create ceph conf directory] ********************************
Thursday 05 February 2026 05:04:59 +0000 (0:00:01.101) 0:24:44.683 *****
skipping: [testbed-node-0]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 05 February 2026 05:05:00 +0000 (0:00:01.108) 0:24:45.792 *****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 05 February 2026 05:05:02 +0000 (0:00:01.128) 0:24:46.920 *****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 05 February 2026 05:05:03 +0000 (0:00:01.123) 0:24:48.044 *****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Thursday 05 February 2026 05:05:04 +0000 (0:00:01.085) 0:24:49.130 *****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _interface] ****************************************
Thursday 05 February 2026 05:05:05 +0000 (0:00:01.149) 0:24:50.279 *****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 05 February 2026 05:05:06 +0000 (0:00:01.370) 0:24:51.650 *****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 05 February 2026 05:05:08 +0000 (0:00:01.403) 0:24:53.054 *****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Thursday 05 February 2026 05:05:09 +0000 (0:00:01.724) 0:24:54.779 *****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Thursday 05 February 2026 05:05:11 +0000 (0:00:01.122) 0:24:55.901 *****
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]

TASK [ceph-config : Generate Ceph file] ****************************************
Thursday 05 February 2026 05:05:12 +0000 (0:00:01.561) 0:24:57.462 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Thursday 05 February 2026 05:05:14 +0000 (0:00:01.678) 0:24:59.140 *****
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Thursday 05 February 2026 05:05:15 +0000 (0:00:01.525) 0:25:00.666 *****
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0

TASK [ceph-mgr : Create mgr directory] *****************************************
Thursday 05 February 2026 05:05:17 +0000 (0:00:01.394) 0:25:02.061 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Thursday 05 February 2026 05:05:18 +0000 (0:00:01.501) 0:25:03.562 *****
skipping: [testbed-node-0]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Thursday 05 February 2026 05:05:19 +0000 (0:00:01.150) 0:25:04.713 *****
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Thursday 05 February 2026 05:05:27 +0000 (0:00:08.040) 0:25:12.753 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Thursday 05 February 2026 05:05:29 +0000 (0:00:01.167) 0:25:13.920 *****
skipping: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Thursday 05 February 2026 05:05:32 +0000 (0:00:03.307) 0:25:17.228 *****
skipping: [testbed-node-0] => (item=None)
ok: [testbed-node-0] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Thursday 05 February 2026 05:05:34 +0000 (0:00:02.065) 0:25:19.293 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Thursday 05 February 2026 05:05:36 +0000 (0:00:01.560) 0:25:20.854 *****
skipping: [testbed-node-0]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Thursday 05 February 2026 05:05:37 +0000 (0:00:01.105) 0:25:21.960 *****
skipping: [testbed-node-0]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Thursday 05 February 2026 05:05:38 +0000 (0:00:01.150) 0:25:23.110 *****
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Thursday 05 February 2026 05:05:39 +0000 (0:00:01.458) 0:25:24.569 *****
skipping: [testbed-node-0]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Thursday 05 February 2026 05:05:40 +0000 (0:00:01.130) 0:25:25.699 *****
skipping: [testbed-node-0]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Thursday 05 February 2026 05:05:42 +0000 (0:00:01.166) 0:25:26.866 *****
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Thursday 05 February 2026 05:05:43 +0000 (0:00:01.470) 0:25:28.336 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Thursday 05 February 2026 05:05:45 +0000 (0:00:02.097) 0:25:30.434 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Thursday 05 February 2026 05:05:47 +0000 (0:00:01.928) 0:25:32.362 *****
ok: [testbed-node-0]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Thursday 05 February 2026 05:05:50 +0000 (0:00:02.545) 0:25:34.908 *****
changed: [testbed-node-0]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Thursday 05 February 2026 05:05:54 +0000 (0:00:04.284) 0:25:39.193 *****
skipping: [testbed-node-0]

PLAY [Upgrade ceph mgr nodes] **************************************************

TASK [Stop ceph mgr] ***********************************************************
Thursday 05 February 2026 05:05:55 +0000 (0:00:01.235) 0:25:40.428 *****
changed: [testbed-node-1]

TASK [Mask ceph mgr systemd unit] **********************************************
Thursday 05 February 2026 05:06:08 +0000 (0:00:12.689) 0:25:53.118 *****
changed: [testbed-node-1]

TASK [ceph-facts : Include facts.yml] ******************************************
Thursday 05 February 2026 05:06:10 +0000 (0:00:02.095) 0:25:55.213 *****
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1

TASK [ceph-facts : Check if it is atomic host] *********************************
Thursday 05 February 2026 05:06:11 +0000 (0:00:01.104) 0:25:56.318 *****
ok: [testbed-node-1]
2026-02-05 05:06:31.229583 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:06:31.229591 | orchestrator | Thursday 05 February 2026 05:06:12 +0000 (0:00:01.449) 0:25:57.768 ***** 2026-02-05 05:06:31.229598 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229605 | orchestrator | 2026-02-05 05:06:31.229612 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:06:31.229632 | orchestrator | Thursday 05 February 2026 05:06:14 +0000 (0:00:01.115) 0:25:58.884 ***** 2026-02-05 05:06:31.229640 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229647 | orchestrator | 2026-02-05 05:06:31.229654 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:06:31.229661 | orchestrator | Thursday 05 February 2026 05:06:15 +0000 (0:00:01.474) 0:26:00.359 ***** 2026-02-05 05:06:31.229669 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229676 | orchestrator | 2026-02-05 05:06:31.229699 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:06:31.229707 | orchestrator | Thursday 05 February 2026 05:06:16 +0000 (0:00:01.107) 0:26:01.467 ***** 2026-02-05 05:06:31.229715 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229722 | orchestrator | 2026-02-05 05:06:31.229729 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:06:31.229737 | orchestrator | Thursday 05 February 2026 05:06:17 +0000 (0:00:01.133) 0:26:02.600 ***** 2026-02-05 05:06:31.229744 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229751 | orchestrator | 2026-02-05 05:06:31.229759 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:06:31.229766 | orchestrator | Thursday 05 February 2026 05:06:18 +0000 (0:00:01.135) 0:26:03.735 
***** 2026-02-05 05:06:31.229774 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:31.229781 | orchestrator | 2026-02-05 05:06:31.229788 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:06:31.229796 | orchestrator | Thursday 05 February 2026 05:06:20 +0000 (0:00:01.125) 0:26:04.861 ***** 2026-02-05 05:06:31.229803 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229810 | orchestrator | 2026-02-05 05:06:31.229817 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:06:31.229825 | orchestrator | Thursday 05 February 2026 05:06:21 +0000 (0:00:01.176) 0:26:06.038 ***** 2026-02-05 05:06:31.229832 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:06:31.229839 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 05:06:31.229847 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:06:31.229854 | orchestrator | 2026-02-05 05:06:31.229862 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:06:31.229869 | orchestrator | Thursday 05 February 2026 05:06:22 +0000 (0:00:01.625) 0:26:07.664 ***** 2026-02-05 05:06:31.229876 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:31.229883 | orchestrator | 2026-02-05 05:06:31.229891 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:06:31.229898 | orchestrator | Thursday 05 February 2026 05:06:24 +0000 (0:00:01.237) 0:26:08.901 ***** 2026-02-05 05:06:31.229905 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:06:31.229913 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 05:06:31.229920 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 
=> (item=testbed-node-2) 2026-02-05 05:06:31.229927 | orchestrator | 2026-02-05 05:06:31.229935 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:06:31.229942 | orchestrator | Thursday 05 February 2026 05:06:27 +0000 (0:00:02.929) 0:26:11.830 ***** 2026-02-05 05:06:31.229949 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 05:06:31.229962 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 05:06:31.229969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 05:06:31.229977 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:31.229984 | orchestrator | 2026-02-05 05:06:31.229991 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:06:31.229998 | orchestrator | Thursday 05 February 2026 05:06:28 +0000 (0:00:01.457) 0:26:13.288 ***** 2026-02-05 05:06:31.230007 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:06:31.230130 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:06:31.230144 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:06:31.230151 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:31.230161 | orchestrator | 2026-02-05 05:06:31.230174 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-02-05 05:06:31.230186 | orchestrator | Thursday 05 February 2026 05:06:30 +0000 (0:00:01.582) 0:26:14.870 ***** 2026-02-05 05:06:31.230208 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:31.230232 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:31.230256 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:50.697766 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.697873 | orchestrator | 2026-02-05 05:06:50.697889 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:06:50.697903 | orchestrator | Thursday 05 February 2026 05:06:31 +0000 (0:00:01.165) 0:26:16.035 ***** 2026-02-05 05:06:50.697915 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:06:24.645974', 'end': '2026-02-05 05:06:24.699684', 'delta': '0:00:00.053710', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:06:50.697956 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:06:25.259279', 'end': '2026-02-05 05:06:25.300946', 'delta': '0:00:00.041667', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:06:50.697968 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:06:25.834769', 'end': '2026-02-05 05:06:25.891388', 'delta': '0:00:00.056619', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:06:50.697978 | orchestrator | 2026-02-05 05:06:50.697989 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:06:50.697999 | orchestrator | Thursday 05 February 2026 05:06:32 +0000 (0:00:01.295) 0:26:17.331 ***** 2026-02-05 05:06:50.698008 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:50.698096 | orchestrator | 2026-02-05 05:06:50.698108 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:06:50.698118 | orchestrator | Thursday 05 February 2026 05:06:33 +0000 (0:00:01.232) 0:26:18.564 ***** 2026-02-05 05:06:50.698128 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698138 | orchestrator | 2026-02-05 05:06:50.698147 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:06:50.698157 | orchestrator | Thursday 05 February 2026 05:06:35 +0000 (0:00:01.276) 0:26:19.841 ***** 2026-02-05 05:06:50.698167 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:50.698176 | orchestrator | 2026-02-05 05:06:50.698186 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:06:50.698196 | orchestrator | Thursday 05 February 2026 05:06:36 +0000 (0:00:01.163) 0:26:21.004 ***** 2026-02-05 05:06:50.698206 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:06:50.698226 | orchestrator | 2026-02-05 05:06:50.698250 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:06:50.698260 | orchestrator | Thursday 05 February 2026 05:06:38 +0000 (0:00:02.003) 0:26:23.008 ***** 2026-02-05 
05:06:50.698270 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:06:50.698279 | orchestrator | 2026-02-05 05:06:50.698289 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:06:50.698301 | orchestrator | Thursday 05 February 2026 05:06:39 +0000 (0:00:01.127) 0:26:24.135 ***** 2026-02-05 05:06:50.698312 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698324 | orchestrator | 2026-02-05 05:06:50.698335 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:06:50.698347 | orchestrator | Thursday 05 February 2026 05:06:40 +0000 (0:00:01.105) 0:26:25.241 ***** 2026-02-05 05:06:50.698358 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698369 | orchestrator | 2026-02-05 05:06:50.698381 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:06:50.698393 | orchestrator | Thursday 05 February 2026 05:06:41 +0000 (0:00:01.192) 0:26:26.434 ***** 2026-02-05 05:06:50.698405 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698424 | orchestrator | 2026-02-05 05:06:50.698454 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:06:50.698466 | orchestrator | Thursday 05 February 2026 05:06:42 +0000 (0:00:01.140) 0:26:27.574 ***** 2026-02-05 05:06:50.698478 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698490 | orchestrator | 2026-02-05 05:06:50.698502 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:06:50.698513 | orchestrator | Thursday 05 February 2026 05:06:43 +0000 (0:00:01.102) 0:26:28.677 ***** 2026-02-05 05:06:50.698524 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698536 | orchestrator | 2026-02-05 05:06:50.698548 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-05 05:06:50.698559 | orchestrator | Thursday 05 February 2026 05:06:44 +0000 (0:00:01.097) 0:26:29.774 ***** 2026-02-05 05:06:50.698571 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698583 | orchestrator | 2026-02-05 05:06:50.698595 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:06:50.698607 | orchestrator | Thursday 05 February 2026 05:06:46 +0000 (0:00:01.112) 0:26:30.887 ***** 2026-02-05 05:06:50.698618 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698630 | orchestrator | 2026-02-05 05:06:50.698640 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:06:50.698649 | orchestrator | Thursday 05 February 2026 05:06:47 +0000 (0:00:01.118) 0:26:32.005 ***** 2026-02-05 05:06:50.698659 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698669 | orchestrator | 2026-02-05 05:06:50.698678 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:06:50.698689 | orchestrator | Thursday 05 February 2026 05:06:48 +0000 (0:00:01.107) 0:26:33.113 ***** 2026-02-05 05:06:50.698699 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:50.698709 | orchestrator | 2026-02-05 05:06:50.698718 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:06:50.698728 | orchestrator | Thursday 05 February 2026 05:06:49 +0000 (0:00:01.137) 0:26:34.251 ***** 2026-02-05 05:06:50.698739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:50.698752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:50.698767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:50.698785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:06:50.698823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-05 05:06:50.698850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:50.698879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:51.940387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91e0d2c4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 
'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 05:06:51.940498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:51.940536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:06:51.940567 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:06:51.940579 | orchestrator | 2026-02-05 05:06:51.940589 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:06:51.940598 | orchestrator | Thursday 05 February 2026 05:06:50 +0000 (0:00:01.253) 0:26:35.504 ***** 2026-02-05 05:06:51.940610 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940640 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940651 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940666 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940682 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940712 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940728 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:06:51.940757 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91e0d2c4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1', 'scsi-SQEMU_QEMU_HARDDISK_91e0d2c4-9998-4651-b894-475b8cd3188f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:07:25.870697 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:07:25.870806 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:07:25.870814 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.870820 | orchestrator | 2026-02-05 05:07:25.870826 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:07:25.870831 | 
orchestrator | Thursday 05 February 2026 05:06:51 +0000 (0:00:01.246) 0:26:36.750 ***** 2026-02-05 05:07:25.870835 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.870840 | orchestrator | 2026-02-05 05:07:25.870844 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:07:25.870848 | orchestrator | Thursday 05 February 2026 05:06:53 +0000 (0:00:01.505) 0:26:38.256 ***** 2026-02-05 05:07:25.870852 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.870856 | orchestrator | 2026-02-05 05:07:25.870860 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:07:25.870864 | orchestrator | Thursday 05 February 2026 05:06:54 +0000 (0:00:01.111) 0:26:39.368 ***** 2026-02-05 05:07:25.870868 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.870872 | orchestrator | 2026-02-05 05:07:25.870875 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:07:25.870880 | orchestrator | Thursday 05 February 2026 05:06:56 +0000 (0:00:01.543) 0:26:40.912 ***** 2026-02-05 05:07:25.870883 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.870887 | orchestrator | 2026-02-05 05:07:25.870891 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:07:25.870895 | orchestrator | Thursday 05 February 2026 05:06:57 +0000 (0:00:01.126) 0:26:42.039 ***** 2026-02-05 05:07:25.870899 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.870903 | orchestrator | 2026-02-05 05:07:25.870907 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:07:25.870911 | orchestrator | Thursday 05 February 2026 05:06:58 +0000 (0:00:01.210) 0:26:43.249 ***** 2026-02-05 05:07:25.870915 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.870919 | orchestrator | 2026-02-05 05:07:25.870923 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:07:25.870927 | orchestrator | Thursday 05 February 2026 05:06:59 +0000 (0:00:01.126) 0:26:44.375 ***** 2026-02-05 05:07:25.870931 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-05 05:07:25.870935 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 05:07:25.870939 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-05 05:07:25.870943 | orchestrator | 2026-02-05 05:07:25.870947 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:07:25.870951 | orchestrator | Thursday 05 February 2026 05:07:01 +0000 (0:00:01.632) 0:26:46.008 ***** 2026-02-05 05:07:25.870955 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 05:07:25.870960 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 05:07:25.870963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 05:07:25.870968 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.870972 | orchestrator | 2026-02-05 05:07:25.870976 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:07:25.870980 | orchestrator | Thursday 05 February 2026 05:07:02 +0000 (0:00:01.226) 0:26:47.235 ***** 2026-02-05 05:07:25.870987 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.870991 | orchestrator | 2026-02-05 05:07:25.870996 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:07:25.871000 | orchestrator | Thursday 05 February 2026 05:07:03 +0000 (0:00:01.164) 0:26:48.400 ***** 2026-02-05 05:07:25.871004 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:07:25.871008 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 
05:07:25.871012 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:07:25.871016 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:07:25.871020 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:07:25.871024 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:07:25.871039 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:07:25.871043 | orchestrator | 2026-02-05 05:07:25.871047 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:07:25.871051 | orchestrator | Thursday 05 February 2026 05:07:05 +0000 (0:00:02.093) 0:26:50.493 ***** 2026-02-05 05:07:25.871055 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:07:25.871059 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 05:07:25.871098 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:07:25.871103 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:07:25.871107 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:07:25.871111 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:07:25.871115 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:07:25.871118 | orchestrator | 2026-02-05 05:07:25.871122 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 05:07:25.871126 | orchestrator | Thursday 05 February 2026 05:07:07 +0000 (0:00:02.211) 0:26:52.705 
***** 2026-02-05 05:07:25.871130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-05 05:07:25.871135 | orchestrator | 2026-02-05 05:07:25.871141 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:07:25.871145 | orchestrator | Thursday 05 February 2026 05:07:08 +0000 (0:00:01.102) 0:26:53.807 ***** 2026-02-05 05:07:25.871149 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-05 05:07:25.871153 | orchestrator | 2026-02-05 05:07:25.871157 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:07:25.871161 | orchestrator | Thursday 05 February 2026 05:07:10 +0000 (0:00:01.189) 0:26:54.997 ***** 2026-02-05 05:07:25.871164 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.871168 | orchestrator | 2026-02-05 05:07:25.871172 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:07:25.871176 | orchestrator | Thursday 05 February 2026 05:07:11 +0000 (0:00:01.512) 0:26:56.509 ***** 2026-02-05 05:07:25.871179 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871183 | orchestrator | 2026-02-05 05:07:25.871187 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:07:25.871191 | orchestrator | Thursday 05 February 2026 05:07:12 +0000 (0:00:01.135) 0:26:57.644 ***** 2026-02-05 05:07:25.871194 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871198 | orchestrator | 2026-02-05 05:07:25.871202 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:07:25.871210 | orchestrator | Thursday 05 February 2026 05:07:13 +0000 (0:00:01.102) 0:26:58.747 ***** 2026-02-05 05:07:25.871213 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
05:07:25.871217 | orchestrator | 2026-02-05 05:07:25.871221 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:07:25.871225 | orchestrator | Thursday 05 February 2026 05:07:15 +0000 (0:00:01.100) 0:26:59.848 ***** 2026-02-05 05:07:25.871228 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.871232 | orchestrator | 2026-02-05 05:07:25.871236 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:07:25.871240 | orchestrator | Thursday 05 February 2026 05:07:16 +0000 (0:00:01.525) 0:27:01.374 ***** 2026-02-05 05:07:25.871243 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871247 | orchestrator | 2026-02-05 05:07:25.871251 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:07:25.871255 | orchestrator | Thursday 05 February 2026 05:07:17 +0000 (0:00:01.152) 0:27:02.527 ***** 2026-02-05 05:07:25.871259 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871263 | orchestrator | 2026-02-05 05:07:25.871268 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:07:25.871272 | orchestrator | Thursday 05 February 2026 05:07:18 +0000 (0:00:01.155) 0:27:03.682 ***** 2026-02-05 05:07:25.871277 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.871281 | orchestrator | 2026-02-05 05:07:25.871285 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:07:25.871290 | orchestrator | Thursday 05 February 2026 05:07:20 +0000 (0:00:01.530) 0:27:05.213 ***** 2026-02-05 05:07:25.871294 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.871299 | orchestrator | 2026-02-05 05:07:25.871303 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:07:25.871307 | orchestrator | Thursday 05 February 2026 
05:07:21 +0000 (0:00:01.522) 0:27:06.735 ***** 2026-02-05 05:07:25.871312 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871316 | orchestrator | 2026-02-05 05:07:25.871320 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:07:25.871325 | orchestrator | Thursday 05 February 2026 05:07:22 +0000 (0:00:00.776) 0:27:07.512 ***** 2026-02-05 05:07:25.871329 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:07:25.871333 | orchestrator | 2026-02-05 05:07:25.871338 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:07:25.871342 | orchestrator | Thursday 05 February 2026 05:07:23 +0000 (0:00:00.822) 0:27:08.335 ***** 2026-02-05 05:07:25.871346 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871351 | orchestrator | 2026-02-05 05:07:25.871355 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 05:07:25.871359 | orchestrator | Thursday 05 February 2026 05:07:24 +0000 (0:00:00.778) 0:27:09.113 ***** 2026-02-05 05:07:25.871364 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:07:25.871368 | orchestrator | 2026-02-05 05:07:25.871373 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:07:25.871378 | orchestrator | Thursday 05 February 2026 05:07:25 +0000 (0:00:00.754) 0:27:09.868 ***** 2026-02-05 05:07:25.871385 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907010 | orchestrator | 2026-02-05 05:08:05.907146 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:08:05.907161 | orchestrator | Thursday 05 February 2026 05:07:25 +0000 (0:00:00.811) 0:27:10.680 ***** 2026-02-05 05:08:05.907178 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907184 | orchestrator | 2026-02-05 05:08:05.907189 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:08:05.907193 | orchestrator | Thursday 05 February 2026 05:07:26 +0000 (0:00:00.778) 0:27:11.458 ***** 2026-02-05 05:08:05.907197 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907202 | orchestrator | 2026-02-05 05:08:05.907206 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 05:08:05.907229 | orchestrator | Thursday 05 February 2026 05:07:27 +0000 (0:00:00.762) 0:27:12.220 ***** 2026-02-05 05:08:05.907234 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907239 | orchestrator | 2026-02-05 05:08:05.907243 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:08:05.907247 | orchestrator | Thursday 05 February 2026 05:07:28 +0000 (0:00:00.784) 0:27:13.005 ***** 2026-02-05 05:08:05.907252 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907255 | orchestrator | 2026-02-05 05:08:05.907259 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 05:08:05.907263 | orchestrator | Thursday 05 February 2026 05:07:28 +0000 (0:00:00.767) 0:27:13.773 ***** 2026-02-05 05:08:05.907267 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907271 | orchestrator | 2026-02-05 05:08:05.907275 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 05:08:05.907289 | orchestrator | Thursday 05 February 2026 05:07:29 +0000 (0:00:00.777) 0:27:14.551 ***** 2026-02-05 05:08:05.907293 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907297 | orchestrator | 2026-02-05 05:08:05.907301 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 05:08:05.907305 | orchestrator | Thursday 05 February 2026 05:07:30 +0000 (0:00:00.756) 0:27:15.307 ***** 2026-02-05 05:08:05.907310 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907314 | orchestrator | 2026-02-05 05:08:05.907318 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 05:08:05.907322 | orchestrator | Thursday 05 February 2026 05:07:31 +0000 (0:00:00.764) 0:27:16.072 ***** 2026-02-05 05:08:05.907325 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907329 | orchestrator | 2026-02-05 05:08:05.907333 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 05:08:05.907337 | orchestrator | Thursday 05 February 2026 05:07:32 +0000 (0:00:00.765) 0:27:16.837 ***** 2026-02-05 05:08:05.907341 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907345 | orchestrator | 2026-02-05 05:08:05.907348 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 05:08:05.907352 | orchestrator | Thursday 05 February 2026 05:07:32 +0000 (0:00:00.777) 0:27:17.615 ***** 2026-02-05 05:08:05.907356 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907360 | orchestrator | 2026-02-05 05:08:05.907364 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 05:08:05.907368 | orchestrator | Thursday 05 February 2026 05:07:33 +0000 (0:00:00.768) 0:27:18.384 ***** 2026-02-05 05:08:05.907371 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907375 | orchestrator | 2026-02-05 05:08:05.907379 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 05:08:05.907383 | orchestrator | Thursday 05 February 2026 05:07:34 +0000 (0:00:00.777) 0:27:19.161 ***** 2026-02-05 05:08:05.907387 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907390 | orchestrator | 2026-02-05 05:08:05.907394 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-02-05 05:08:05.907399 | orchestrator | Thursday 05 February 2026 05:07:35 +0000 (0:00:00.761) 0:27:19.923 ***** 2026-02-05 05:08:05.907403 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907407 | orchestrator | 2026-02-05 05:08:05.907410 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:08:05.907414 | orchestrator | Thursday 05 February 2026 05:07:35 +0000 (0:00:00.786) 0:27:20.710 ***** 2026-02-05 05:08:05.907418 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907422 | orchestrator | 2026-02-05 05:08:05.907425 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:08:05.907429 | orchestrator | Thursday 05 February 2026 05:07:36 +0000 (0:00:00.760) 0:27:21.471 ***** 2026-02-05 05:08:05.907433 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907437 | orchestrator | 2026-02-05 05:08:05.907441 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:08:05.907448 | orchestrator | Thursday 05 February 2026 05:07:37 +0000 (0:00:00.749) 0:27:22.220 ***** 2026-02-05 05:08:05.907452 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907456 | orchestrator | 2026-02-05 05:08:05.907460 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-05 05:08:05.907464 | orchestrator | Thursday 05 February 2026 05:07:38 +0000 (0:00:00.764) 0:27:22.985 ***** 2026-02-05 05:08:05.907467 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907471 | orchestrator | 2026-02-05 05:08:05.907475 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:08:05.907479 | orchestrator | Thursday 05 February 2026 05:07:38 +0000 (0:00:00.768) 0:27:23.754 ***** 2026-02-05 05:08:05.907483 | orchestrator | ok: [testbed-node-1] 
2026-02-05 05:08:05.907486 | orchestrator | 2026-02-05 05:08:05.907490 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:08:05.907494 | orchestrator | Thursday 05 February 2026 05:07:40 +0000 (0:00:01.633) 0:27:25.387 ***** 2026-02-05 05:08:05.907498 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907502 | orchestrator | 2026-02-05 05:08:05.907506 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:08:05.907509 | orchestrator | Thursday 05 February 2026 05:07:42 +0000 (0:00:02.199) 0:27:27.586 ***** 2026-02-05 05:08:05.907514 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-05 05:08:05.907518 | orchestrator | 2026-02-05 05:08:05.907535 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:08:05.907539 | orchestrator | Thursday 05 February 2026 05:07:43 +0000 (0:00:01.153) 0:27:28.740 ***** 2026-02-05 05:08:05.907543 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907546 | orchestrator | 2026-02-05 05:08:05.907550 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:08:05.907555 | orchestrator | Thursday 05 February 2026 05:07:45 +0000 (0:00:01.130) 0:27:29.870 ***** 2026-02-05 05:08:05.907560 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907565 | orchestrator | 2026-02-05 05:08:05.907569 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:08:05.907574 | orchestrator | Thursday 05 February 2026 05:07:46 +0000 (0:00:01.173) 0:27:31.044 ***** 2026-02-05 05:08:05.907578 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:08:05.907583 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:08:05.907588 | orchestrator | 2026-02-05 05:08:05.907592 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:08:05.907597 | orchestrator | Thursday 05 February 2026 05:07:48 +0000 (0:00:01.876) 0:27:32.921 ***** 2026-02-05 05:08:05.907602 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907606 | orchestrator | 2026-02-05 05:08:05.907611 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:08:05.907616 | orchestrator | Thursday 05 February 2026 05:07:49 +0000 (0:00:01.518) 0:27:34.439 ***** 2026-02-05 05:08:05.907623 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907628 | orchestrator | 2026-02-05 05:08:05.907632 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:08:05.907637 | orchestrator | Thursday 05 February 2026 05:07:50 +0000 (0:00:01.128) 0:27:35.568 ***** 2026-02-05 05:08:05.907642 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907646 | orchestrator | 2026-02-05 05:08:05.907650 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:08:05.907655 | orchestrator | Thursday 05 February 2026 05:07:51 +0000 (0:00:00.780) 0:27:36.348 ***** 2026-02-05 05:08:05.907660 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907664 | orchestrator | 2026-02-05 05:08:05.907669 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:08:05.907674 | orchestrator | Thursday 05 February 2026 05:07:52 +0000 (0:00:00.765) 0:27:37.113 ***** 2026-02-05 05:08:05.907683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-05 05:08:05.907687 | orchestrator | 2026-02-05 05:08:05.907692 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:08:05.907697 | orchestrator | Thursday 05 February 2026 05:07:53 +0000 (0:00:01.133) 0:27:38.247 ***** 2026-02-05 05:08:05.907701 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907706 | orchestrator | 2026-02-05 05:08:05.907711 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:08:05.907716 | orchestrator | Thursday 05 February 2026 05:07:55 +0000 (0:00:01.788) 0:27:40.036 ***** 2026-02-05 05:08:05.907722 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:08:05.907727 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:08:05.907732 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:08:05.907736 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907741 | orchestrator | 2026-02-05 05:08:05.907746 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:08:05.907751 | orchestrator | Thursday 05 February 2026 05:07:56 +0000 (0:00:01.146) 0:27:41.182 ***** 2026-02-05 05:08:05.907755 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907759 | orchestrator | 2026-02-05 05:08:05.907764 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-05 05:08:05.907768 | orchestrator | Thursday 05 February 2026 05:07:57 +0000 (0:00:01.112) 0:27:42.295 ***** 2026-02-05 05:08:05.907773 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907778 | orchestrator | 2026-02-05 05:08:05.907782 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:08:05.907786 | orchestrator | Thursday 05 February 2026 05:07:58 +0000 (0:00:01.171) 0:27:43.467 ***** 2026-02-05 05:08:05.907791 
| orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907796 | orchestrator | 2026-02-05 05:08:05.907801 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:08:05.907805 | orchestrator | Thursday 05 February 2026 05:07:59 +0000 (0:00:01.125) 0:27:44.592 ***** 2026-02-05 05:08:05.907810 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907815 | orchestrator | 2026-02-05 05:08:05.907819 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:08:05.907824 | orchestrator | Thursday 05 February 2026 05:08:00 +0000 (0:00:01.170) 0:27:45.763 ***** 2026-02-05 05:08:05.907829 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:05.907833 | orchestrator | 2026-02-05 05:08:05.907838 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:08:05.907842 | orchestrator | Thursday 05 February 2026 05:08:01 +0000 (0:00:00.791) 0:27:46.555 ***** 2026-02-05 05:08:05.907846 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907851 | orchestrator | 2026-02-05 05:08:05.907855 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:08:05.907860 | orchestrator | Thursday 05 February 2026 05:08:03 +0000 (0:00:02.253) 0:27:48.809 ***** 2026-02-05 05:08:05.907864 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:05.907869 | orchestrator | 2026-02-05 05:08:05.907874 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:08:05.907878 | orchestrator | Thursday 05 February 2026 05:08:04 +0000 (0:00:00.771) 0:27:49.580 ***** 2026-02-05 05:08:05.907883 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-05 05:08:05.907888 | orchestrator | 2026-02-05 05:08:05.907896 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-02-05 05:08:42.800191 | orchestrator | Thursday 05 February 2026 05:08:05 +0000 (0:00:01.133) 0:27:50.713 ***** 2026-02-05 05:08:42.800315 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800333 | orchestrator | 2026-02-05 05:08:42.800347 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:08:42.800382 | orchestrator | Thursday 05 February 2026 05:08:07 +0000 (0:00:01.164) 0:27:51.878 ***** 2026-02-05 05:08:42.800394 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800405 | orchestrator | 2026-02-05 05:08:42.800417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:08:42.800428 | orchestrator | Thursday 05 February 2026 05:08:08 +0000 (0:00:01.127) 0:27:53.006 ***** 2026-02-05 05:08:42.800439 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800450 | orchestrator | 2026-02-05 05:08:42.800461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:08:42.800472 | orchestrator | Thursday 05 February 2026 05:08:09 +0000 (0:00:01.131) 0:27:54.137 ***** 2026-02-05 05:08:42.800483 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800494 | orchestrator | 2026-02-05 05:08:42.800505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-05 05:08:42.800516 | orchestrator | Thursday 05 February 2026 05:08:10 +0000 (0:00:01.145) 0:27:55.283 ***** 2026-02-05 05:08:42.800527 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800538 | orchestrator | 2026-02-05 05:08:42.800549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:08:42.800575 | orchestrator | Thursday 05 February 2026 05:08:11 +0000 (0:00:01.207) 0:27:56.491 ***** 2026-02-05 05:08:42.800587 | orchestrator | 
skipping: [testbed-node-1] 2026-02-05 05:08:42.800598 | orchestrator | 2026-02-05 05:08:42.800609 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:08:42.800620 | orchestrator | Thursday 05 February 2026 05:08:12 +0000 (0:00:01.126) 0:27:57.618 ***** 2026-02-05 05:08:42.800631 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800642 | orchestrator | 2026-02-05 05:08:42.800653 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:08:42.800664 | orchestrator | Thursday 05 February 2026 05:08:13 +0000 (0:00:01.134) 0:27:58.753 ***** 2026-02-05 05:08:42.800675 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:08:42.800685 | orchestrator | 2026-02-05 05:08:42.800697 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:08:42.800709 | orchestrator | Thursday 05 February 2026 05:08:15 +0000 (0:00:01.137) 0:27:59.891 ***** 2026-02-05 05:08:42.800723 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:08:42.800737 | orchestrator | 2026-02-05 05:08:42.800750 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:08:42.800763 | orchestrator | Thursday 05 February 2026 05:08:15 +0000 (0:00:00.857) 0:28:00.748 ***** 2026-02-05 05:08:42.800777 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-05 05:08:42.800791 | orchestrator | 2026-02-05 05:08:42.800805 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:08:42.800818 | orchestrator | Thursday 05 February 2026 05:08:17 +0000 (0:00:01.113) 0:28:01.862 ***** 2026-02-05 05:08:42.800830 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-05 05:08:42.800844 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-05 
05:08:42.800857 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-05 05:08:42.800870 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-05 05:08:42.800883 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-05 05:08:42.800896 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-05 05:08:42.800909 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-05 05:08:42.800922 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-05 05:08:42.800936 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 05:08:42.800948 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 05:08:42.800961 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 05:08:42.800982 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 05:08:42.800995 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 05:08:42.801008 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 05:08:42.801021 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-05 05:08:42.801034 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-05 05:08:42.801046 | orchestrator |
2026-02-05 05:08:42.801059 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 05:08:42.801070 | orchestrator | Thursday 05 February 2026 05:08:23 +0000 (0:00:06.537) 0:28:08.399 *****
2026-02-05 05:08:42.801081 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801092 | orchestrator |
2026-02-05 05:08:42.801103 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 05:08:42.801209 | orchestrator | Thursday 05 February 2026 05:08:24 +0000 (0:00:00.791) 0:28:09.191 *****
2026-02-05 05:08:42.801222 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801232 | orchestrator |
2026-02-05 05:08:42.801243 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 05:08:42.801254 | orchestrator | Thursday 05 February 2026 05:08:25 +0000 (0:00:00.754) 0:28:09.945 *****
2026-02-05 05:08:42.801265 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801276 | orchestrator |
2026-02-05 05:08:42.801287 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 05:08:42.801298 | orchestrator | Thursday 05 February 2026 05:08:25 +0000 (0:00:00.798) 0:28:10.744 *****
2026-02-05 05:08:42.801309 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801319 | orchestrator |
2026-02-05 05:08:42.801330 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 05:08:42.801360 | orchestrator | Thursday 05 February 2026 05:08:26 +0000 (0:00:00.767) 0:28:11.512 *****
2026-02-05 05:08:42.801372 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801384 | orchestrator |
2026-02-05 05:08:42.801395 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 05:08:42.801406 | orchestrator | Thursday 05 February 2026 05:08:27 +0000 (0:00:00.782) 0:28:12.294 *****
2026-02-05 05:08:42.801416 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801427 | orchestrator |
2026-02-05 05:08:42.801438 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 05:08:42.801450 | orchestrator | Thursday 05 February 2026 05:08:28 +0000 (0:00:00.771) 0:28:13.066 *****
2026-02-05 05:08:42.801461 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801471 | orchestrator |
2026-02-05 05:08:42.801482 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 05:08:42.801493 | orchestrator | Thursday 05 February 2026 05:08:29 +0000 (0:00:00.778) 0:28:13.844 *****
2026-02-05 05:08:42.801504 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801515 | orchestrator |
2026-02-05 05:08:42.801526 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 05:08:42.801537 | orchestrator | Thursday 05 February 2026 05:08:29 +0000 (0:00:00.766) 0:28:14.611 *****
2026-02-05 05:08:42.801548 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801559 | orchestrator |
2026-02-05 05:08:42.801576 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 05:08:42.801588 | orchestrator | Thursday 05 February 2026 05:08:30 +0000 (0:00:00.788) 0:28:15.400 *****
2026-02-05 05:08:42.801599 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801609 | orchestrator |
2026-02-05 05:08:42.801620 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 05:08:42.801631 | orchestrator | Thursday 05 February 2026 05:08:31 +0000 (0:00:00.775) 0:28:16.175 *****
2026-02-05 05:08:42.801642 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801653 | orchestrator |
2026-02-05 05:08:42.801671 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 05:08:42.801681 | orchestrator | Thursday 05 February 2026 05:08:32 +0000 (0:00:00.758) 0:28:16.933 *****
2026-02-05 05:08:42.801691 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801700 | orchestrator |
2026-02-05 05:08:42.801710 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 05:08:42.801720 | orchestrator | Thursday 05 February 2026 05:08:32 +0000 (0:00:00.802) 0:28:17.736 *****
2026-02-05 05:08:42.801730 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801739 | orchestrator |
2026-02-05 05:08:42.801749 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 05:08:42.801759 | orchestrator | Thursday 05 February 2026 05:08:33 +0000 (0:00:00.885) 0:28:18.622 *****
2026-02-05 05:08:42.801768 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801778 | orchestrator |
2026-02-05 05:08:42.801787 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 05:08:42.801797 | orchestrator | Thursday 05 February 2026 05:08:34 +0000 (0:00:00.762) 0:28:19.385 *****
2026-02-05 05:08:42.801807 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801816 | orchestrator |
2026-02-05 05:08:42.801826 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 05:08:42.801836 | orchestrator | Thursday 05 February 2026 05:08:35 +0000 (0:00:00.849) 0:28:20.235 *****
2026-02-05 05:08:42.801845 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801855 | orchestrator |
2026-02-05 05:08:42.801865 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 05:08:42.801874 | orchestrator | Thursday 05 February 2026 05:08:36 +0000 (0:00:00.773) 0:28:21.009 *****
2026-02-05 05:08:42.801884 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801894 | orchestrator |
2026-02-05 05:08:42.801904 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:08:42.801915 | orchestrator | Thursday 05 February 2026 05:08:36 +0000 (0:00:00.753) 0:28:21.762 *****
2026-02-05 05:08:42.801924 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801934 | orchestrator |
2026-02-05 05:08:42.801944 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:08:42.801953 | orchestrator | Thursday 05 February 2026 05:08:37 +0000 (0:00:00.767) 0:28:22.530 *****
2026-02-05 05:08:42.801963 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.801972 | orchestrator |
2026-02-05 05:08:42.801982 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:08:42.801992 | orchestrator | Thursday 05 February 2026 05:08:38 +0000 (0:00:00.765) 0:28:23.296 *****
2026-02-05 05:08:42.802002 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.802011 | orchestrator |
2026-02-05 05:08:42.802088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:08:42.802125 | orchestrator | Thursday 05 February 2026 05:08:39 +0000 (0:00:00.752) 0:28:24.048 *****
2026-02-05 05:08:42.802142 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.802157 | orchestrator |
2026-02-05 05:08:42.802173 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:08:42.802188 | orchestrator | Thursday 05 February 2026 05:08:39 +0000 (0:00:00.774) 0:28:24.823 *****
2026-02-05 05:08:42.802204 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 05:08:42.802231 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 05:08:42.802248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 05:08:42.802266 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:08:42.802282 | orchestrator |
2026-02-05 05:08:42.802298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:08:42.802312 | orchestrator | Thursday 05 February 2026 05:08:41 +0000 (0:00:01.414) 0:28:26.237 *****
2026-02-05 05:08:42.802322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 05:08:42.802351 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 05:09:40.821833 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 05:09:40.821924 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.821933 | orchestrator |
2026-02-05 05:09:40.821940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:09:40.821948 | orchestrator | Thursday 05 February 2026 05:08:42 +0000 (0:00:01.370) 0:28:27.607 *****
2026-02-05 05:09:40.821954 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 05:09:40.821960 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 05:09:40.821965 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 05:09:40.821971 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.821976 | orchestrator |
2026-02-05 05:09:40.821982 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:09:40.821988 | orchestrator | Thursday 05 February 2026 05:08:43 +0000 (0:00:01.065) 0:28:28.673 *****
2026-02-05 05:09:40.821993 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.821998 | orchestrator |
2026-02-05 05:09:40.822004 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:09:40.822009 | orchestrator | Thursday 05 February 2026 05:08:44 +0000 (0:00:00.793) 0:28:29.467 *****
2026-02-05 05:09:40.822049 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-05 05:09:40.822056 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822061 | orchestrator |
2026-02-05 05:09:40.822078 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 05:09:40.822084 | orchestrator | Thursday 05 February 2026 05:08:45 +0000 (0:00:00.909) 0:28:30.377 *****
2026-02-05 05:09:40.822089 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822095 | orchestrator |
2026-02-05 05:09:40.822100 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-05 05:09:40.822105 | orchestrator | Thursday 05 February 2026 05:08:47 +0000 (0:00:01.463) 0:28:31.841 *****
2026-02-05 05:09:40.822110 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:09:40.822139 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 05:09:40.822145 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:09:40.822150 | orchestrator |
2026-02-05 05:09:40.822155 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-05 05:09:40.822160 | orchestrator | Thursday 05 February 2026 05:08:48 +0000 (0:00:01.280) 0:28:33.122 *****
2026-02-05 05:09:40.822165 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-02-05 05:09:40.822171 | orchestrator |
2026-02-05 05:09:40.822176 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-05 05:09:40.822181 | orchestrator | Thursday 05 February 2026 05:08:49 +0000 (0:00:01.087) 0:28:34.210 *****
2026-02-05 05:09:40.822186 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822197 | orchestrator |
2026-02-05 05:09:40.822203 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-05 05:09:40.822208 | orchestrator | Thursday 05 February 2026 05:08:50 +0000 (0:00:01.478) 0:28:35.689 *****
2026-02-05 05:09:40.822213 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822219 | orchestrator |
2026-02-05 05:09:40.822224 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-05 05:09:40.822229 | orchestrator | Thursday 05 February 2026 05:08:51 +0000 (0:00:01.121) 0:28:36.810 *****
2026-02-05 05:09:40.822234 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:09:40.822239 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:09:40.822244 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:09:40.822249 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-02-05 05:09:40.822272 | orchestrator |
2026-02-05 05:09:40.822278 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-05 05:09:40.822283 | orchestrator | Thursday 05 February 2026 05:09:00 +0000 (0:00:08.134) 0:28:44.944 *****
2026-02-05 05:09:40.822288 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822293 | orchestrator |
2026-02-05 05:09:40.822299 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-05 05:09:40.822304 | orchestrator | Thursday 05 February 2026 05:09:01 +0000 (0:00:01.177) 0:28:46.122 *****
2026-02-05 05:09:40.822309 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-05 05:09:40.822314 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-05 05:09:40.822320 | orchestrator |
2026-02-05 05:09:40.822325 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-05 05:09:40.822330 | orchestrator | Thursday 05 February 2026 05:09:04 +0000 (0:00:03.219) 0:28:49.342 *****
2026-02-05 05:09:40.822335 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-05 05:09:40.822340 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-05 05:09:40.822346 | orchestrator |
2026-02-05 05:09:40.822351 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-05 05:09:40.822356 | orchestrator | Thursday 05 February 2026 05:09:06 +0000 (0:00:01.973) 0:28:51.315 *****
2026-02-05 05:09:40.822361 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822366 | orchestrator |
2026-02-05 05:09:40.822371 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-05 05:09:40.822377 | orchestrator | Thursday 05 February 2026 05:09:08 +0000 (0:00:01.596) 0:28:52.912 *****
2026-02-05 05:09:40.822382 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822388 | orchestrator |
2026-02-05 05:09:40.822395 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-05 05:09:40.822400 | orchestrator | Thursday 05 February 2026 05:09:08 +0000 (0:00:00.768) 0:28:53.680 *****
2026-02-05 05:09:40.822407 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822412 | orchestrator |
2026-02-05 05:09:40.822419 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-05 05:09:40.822435 | orchestrator | Thursday 05 February 2026 05:09:09 +0000 (0:00:00.770) 0:28:54.451 *****
2026-02-05 05:09:40.822442 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-02-05 05:09:40.822448 | orchestrator |
2026-02-05 05:09:40.822454 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-05 05:09:40.822460 | orchestrator | Thursday 05 February 2026 05:09:10 +0000 (0:00:01.114) 0:28:55.566 *****
2026-02-05 05:09:40.822466 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822472 | orchestrator |
2026-02-05 05:09:40.822478 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-05 05:09:40.822484 | orchestrator | Thursday 05 February 2026 05:09:11 +0000 (0:00:01.117) 0:28:56.684 *****
2026-02-05 05:09:40.822490 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822496 | orchestrator |
2026-02-05 05:09:40.822502 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-05 05:09:40.822508 | orchestrator | Thursday 05 February 2026 05:09:13 +0000 (0:00:01.158) 0:28:57.842 *****
2026-02-05 05:09:40.822514 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-02-05 05:09:40.822520 | orchestrator |
2026-02-05 05:09:40.822526 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-05 05:09:40.822532 | orchestrator | Thursday 05 February 2026 05:09:14 +0000 (0:00:01.105) 0:28:58.948 *****
2026-02-05 05:09:40.822538 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822545 | orchestrator |
2026-02-05 05:09:40.822554 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-05 05:09:40.822560 | orchestrator | Thursday 05 February 2026 05:09:16 +0000 (0:00:02.019) 0:29:00.968 *****
2026-02-05 05:09:40.822572 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822577 | orchestrator |
2026-02-05 05:09:40.822584 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-05 05:09:40.822590 | orchestrator | Thursday 05 February 2026 05:09:18 +0000 (0:00:01.941) 0:29:02.909 *****
2026-02-05 05:09:40.822595 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:09:40.822602 | orchestrator |
2026-02-05 05:09:40.822608 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-05 05:09:40.822614 | orchestrator | Thursday 05 February 2026 05:09:20 +0000 (0:00:02.489) 0:29:05.399 *****
2026-02-05 05:09:40.822620 | orchestrator | changed: [testbed-node-1]
2026-02-05 05:09:40.822627 | orchestrator |
2026-02-05 05:09:40.822633 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-05 05:09:40.822639 | orchestrator | Thursday 05 February 2026 05:09:24 +0000 (0:00:03.608) 0:29:09.008 *****
2026-02-05 05:09:40.822645 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:09:40.822651 | orchestrator |
2026-02-05 05:09:40.822657 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-05 05:09:40.822664 | orchestrator |
2026-02-05 05:09:40.822669 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-05 05:09:40.822675 | orchestrator | Thursday 05 February 2026 05:09:25 +0000 (0:00:00.971) 0:29:09.980 *****
2026-02-05 05:09:40.822681 | orchestrator | changed: [testbed-node-2]
2026-02-05 05:09:40.822687 | orchestrator |
2026-02-05 05:09:40.822693 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-05 05:09:40.822699 | orchestrator | Thursday 05 February 2026 05:09:27 +0000 (0:00:02.591) 0:29:12.572 *****
2026-02-05 05:09:40.822706 | orchestrator | changed: [testbed-node-2]
2026-02-05 05:09:40.822712 | orchestrator |
2026-02-05 05:09:40.822718 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:09:40.822724 | orchestrator | Thursday 05 February 2026 05:09:29 +0000 (0:00:02.128) 0:29:14.700 *****
2026-02-05 05:09:40.822730 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-02-05 05:09:40.822736 | orchestrator |
2026-02-05 05:09:40.822743 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 05:09:40.822749 | orchestrator | Thursday 05 February 2026 05:09:30 +0000 (0:00:01.108) 0:29:15.809 *****
2026-02-05 05:09:40.822754 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822759 | orchestrator |
2026-02-05 05:09:40.822764 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 05:09:40.822769 | orchestrator | Thursday 05 February 2026 05:09:32 +0000 (0:00:01.482) 0:29:17.291 *****
2026-02-05 05:09:40.822774 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822780 | orchestrator |
2026-02-05 05:09:40.822785 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:09:40.822790 | orchestrator | Thursday 05 February 2026 05:09:33 +0000 (0:00:01.175) 0:29:18.467 *****
2026-02-05 05:09:40.822795 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822800 | orchestrator |
2026-02-05 05:09:40.822805 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:09:40.822811 | orchestrator | Thursday 05 February 2026 05:09:35 +0000 (0:00:01.514) 0:29:19.981 *****
2026-02-05 05:09:40.822816 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822821 | orchestrator |
2026-02-05 05:09:40.822826 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 05:09:40.822831 | orchestrator | Thursday 05 February 2026 05:09:36 +0000 (0:00:01.135) 0:29:21.116 *****
2026-02-05 05:09:40.822836 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822841 | orchestrator |
2026-02-05 05:09:40.822846 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 05:09:40.822852 | orchestrator | Thursday 05 February 2026 05:09:37 +0000 (0:00:01.159) 0:29:22.276 *****
2026-02-05 05:09:40.822857 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822862 | orchestrator |
2026-02-05 05:09:40.822867 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 05:09:40.822876 | orchestrator | Thursday 05 February 2026 05:09:38 +0000 (0:00:01.125) 0:29:23.402 *****
2026-02-05 05:09:40.822881 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:09:40.822886 | orchestrator |
2026-02-05 05:09:40.822891 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 05:09:40.822897 | orchestrator | Thursday 05 February 2026 05:09:39 +0000 (0:00:01.109) 0:29:24.511 *****
2026-02-05 05:09:40.822902 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:09:40.822907 | orchestrator |
2026-02-05 05:09:40.822915 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 05:10:05.026544 | orchestrator | Thursday 05 February 2026 05:09:40 +0000 (0:00:01.120) 0:29:25.632 *****
2026-02-05 05:10:05.026630 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:10:05.026644 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:10:05.026654 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 05:10:05.026664 | orchestrator |
2026-02-05 05:10:05.026671 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 05:10:05.026677 | orchestrator | Thursday 05 February 2026 05:09:42 +0000 (0:00:01.665) 0:29:27.298 *****
2026-02-05 05:10:05.026682 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:05.026688 | orchestrator |
2026-02-05 05:10:05.026693 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 05:10:05.026699 | orchestrator | Thursday 05 February 2026 05:09:43 +0000 (0:00:01.287) 0:29:28.585 *****
2026-02-05 05:10:05.026704 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:10:05.026709 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:10:05.026714 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 05:10:05.026719 | orchestrator |
2026-02-05 05:10:05.026738 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 05:10:05.026743 | orchestrator | Thursday 05 February 2026 05:09:46 +0000 (0:00:02.911) 0:29:31.497 *****
2026-02-05 05:10:05.026749 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 05:10:05.026754 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 05:10:05.026759 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 05:10:05.026764 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.026770 | orchestrator |
2026-02-05 05:10:05.026775 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 05:10:05.026780 | orchestrator | Thursday 05 February 2026 05:09:48 +0000 (0:00:01.411) 0:29:32.908 *****
2026-02-05 05:10:05.026786 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026794 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026799 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026805 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.026810 | orchestrator |
2026-02-05 05:10:05.026815 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 05:10:05.026820 | orchestrator | Thursday 05 February 2026 05:09:49 +0000 (0:00:01.887) 0:29:34.796 *****
2026-02-05 05:10:05.026828 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026854 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026864 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026869 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.026875 | orchestrator |
2026-02-05 05:10:05.026880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 05:10:05.026885 | orchestrator | Thursday 05 February 2026 05:09:51 +0000 (0:00:01.190) 0:29:35.986 *****
2026-02-05 05:10:05.026904 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:09:44.334742', 'end': '2026-02-05 05:09:44.375027', 'delta': '0:00:00.040285', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026915 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:09:44.939962', 'end': '2026-02-05 05:09:44.977985', 'delta': '0:00:00.038023', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026920 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:09:45.494812', 'end': '2026-02-05 05:09:45.544621', 'delta': '0:00:00.049809', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:10:05.026926 | orchestrator |
2026-02-05 05:10:05.026931 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 05:10:05.026936 | orchestrator | Thursday 05 February 2026 05:09:52 +0000 (0:00:01.206) 0:29:37.193 *****
2026-02-05 05:10:05.026946 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:05.026951 | orchestrator |
2026-02-05 05:10:05.026956 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 05:10:05.026961 | orchestrator | Thursday 05 February 2026 05:09:53 +0000 (0:00:01.219) 0:29:38.476 *****
2026-02-05 05:10:05.026966 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.026971 | orchestrator |
2026-02-05 05:10:05.026976 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 05:10:05.026981 | orchestrator | Thursday 05 February 2026 05:09:54 +0000 (0:00:01.177) 0:29:39.696 *****
2026-02-05 05:10:05.026986 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:05.026991 | orchestrator |
2026-02-05 05:10:05.026996 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 05:10:05.027001 | orchestrator | Thursday 05 February 2026 05:09:56 +0000 (0:00:01.177) 0:29:40.873 *****
2026-02-05 05:10:05.027006 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 05:10:05.027012 | orchestrator |
2026-02-05 05:10:05.027017 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:10:05.027021 | orchestrator | Thursday 05 February 2026 05:09:58 +0000 (0:00:02.048) 0:29:42.922 *****
2026-02-05 05:10:05.027026 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:05.027032 | orchestrator |
2026-02-05 05:10:05.027037 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 05:10:05.027042 | orchestrator | Thursday 05 February 2026 05:09:59 +0000 (0:00:01.181) 0:29:44.103 *****
2026-02-05 05:10:05.027047 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.027052 | orchestrator |
2026-02-05 05:10:05.027057 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 05:10:05.027062 | orchestrator | Thursday 05 February 2026 05:10:00 +0000 (0:00:01.071) 0:29:45.175 *****
2026-02-05 05:10:05.027067 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.027072 | orchestrator |
2026-02-05 05:10:05.027077 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:10:05.027095 | orchestrator | Thursday 05 February 2026 05:10:01 +0000 (0:00:01.231) 0:29:46.407 *****
2026-02-05 05:10:05.027100 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.027127 | orchestrator |
2026-02-05 05:10:05.027136 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 05:10:05.027144 | orchestrator | Thursday 05 February 2026 05:10:02 +0000 (0:00:01.157) 0:29:47.565 *****
2026-02-05 05:10:05.027152 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.027160 | orchestrator |
2026-02-05 05:10:05.027168 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 05:10:05.027173 | orchestrator | Thursday 05 February 2026 05:10:03 +0000 (0:00:01.152) 0:29:48.717 *****
2026-02-05 05:10:05.027179 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:05.027184 | orchestrator |
2026-02-05 05:10:05.027194 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 05:10:11.920528 | orchestrator | Thursday 05 February 2026 05:10:05 +0000 (0:00:01.118) 0:29:49.835 *****
2026-02-05 05:10:11.920662 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:11.920681 | orchestrator |
2026-02-05 05:10:11.920693 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 05:10:11.920704 | orchestrator | Thursday 05 February 2026 05:10:06 +0000 (0:00:01.134) 0:29:50.970 *****
2026-02-05 05:10:11.920715 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:11.920724 | orchestrator |
2026-02-05 05:10:11.920735 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 05:10:11.920745 | orchestrator | Thursday 05 February 2026 05:10:07 +0000 (0:00:01.109) 0:29:52.080 *****
2026-02-05 05:10:11.920754 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:11.920764 | orchestrator |
2026-02-05 05:10:11.920774 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 05:10:11.920809 | orchestrator | Thursday 05 February 2026 05:10:08 +0000 (0:00:01.106) 0:29:53.186 *****
2026-02-05 05:10:11.920820 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:11.920829 | orchestrator |
2026-02-05 05:10:11.920839 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 05:10:11.920862 | orchestrator | Thursday 05 February 2026 05:10:09 +0000 (0:00:01.112) 0:29:54.298 *****
2026-02-05 05:10:11.920875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:10:11.920889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:10:11.920899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:10:11.920911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 05:10:11.920924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:10:11.920934 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:10:11.920944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:10:11.920984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48b9971a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:10:11.921005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:10:11.921015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:10:11.921025 | orchestrator | 
skipping: [testbed-node-2] 2026-02-05 05:10:11.921035 | orchestrator | 2026-02-05 05:10:11.921045 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:10:11.921055 | orchestrator | Thursday 05 February 2026 05:10:10 +0000 (0:00:01.247) 0:29:55.546 ***** 2026-02-05 05:10:11.921066 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:11.921086 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642629 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642643 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642651 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642657 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642686 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48b9971a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1', 'scsi-SQEMU_QEMU_HARDDISK_48b9971a-a594-48d0-a5ef-0421396a811f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642715 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642722 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:10:19.642728 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:19.642736 | orchestrator | 2026-02-05 05:10:19.642744 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:10:19.642751 | orchestrator | Thursday 05 February 2026 05:10:11 +0000 (0:00:01.189) 0:29:56.736 ***** 2026-02-05 05:10:19.642757 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:19.642764 | orchestrator | 2026-02-05 05:10:19.642770 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:10:19.642776 | orchestrator 
| Thursday 05 February 2026 05:10:13 +0000 (0:00:01.543) 0:29:58.279 ***** 2026-02-05 05:10:19.642782 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:19.642787 | orchestrator | 2026-02-05 05:10:19.642793 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:10:19.642798 | orchestrator | Thursday 05 February 2026 05:10:14 +0000 (0:00:01.139) 0:29:59.419 ***** 2026-02-05 05:10:19.642809 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:19.642815 | orchestrator | 2026-02-05 05:10:19.642820 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:10:19.642826 | orchestrator | Thursday 05 February 2026 05:10:16 +0000 (0:00:01.508) 0:30:00.928 ***** 2026-02-05 05:10:19.642832 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:19.642837 | orchestrator | 2026-02-05 05:10:19.642843 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:10:19.642849 | orchestrator | Thursday 05 February 2026 05:10:17 +0000 (0:00:01.111) 0:30:02.039 ***** 2026-02-05 05:10:19.642854 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:19.642860 | orchestrator | 2026-02-05 05:10:19.642866 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:10:19.642872 | orchestrator | Thursday 05 February 2026 05:10:18 +0000 (0:00:01.229) 0:30:03.269 ***** 2026-02-05 05:10:19.642878 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:19.642884 | orchestrator | 2026-02-05 05:10:19.642890 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:10:19.642902 | orchestrator | Thursday 05 February 2026 05:10:19 +0000 (0:00:01.183) 0:30:04.452 ***** 2026-02-05 05:10:55.845945 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-05 05:10:55.846077 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-02-05 05:10:55.846090 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 05:10:55.846097 | orchestrator | 2026-02-05 05:10:55.846105 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:10:55.846113 | orchestrator | Thursday 05 February 2026 05:10:21 +0000 (0:00:01.666) 0:30:06.118 ***** 2026-02-05 05:10:55.846147 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 05:10:55.846159 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 05:10:55.846170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 05:10:55.846182 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846192 | orchestrator | 2026-02-05 05:10:55.846217 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:10:55.846224 | orchestrator | Thursday 05 February 2026 05:10:22 +0000 (0:00:01.154) 0:30:07.273 ***** 2026-02-05 05:10:55.846230 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846237 | orchestrator | 2026-02-05 05:10:55.846243 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:10:55.846250 | orchestrator | Thursday 05 February 2026 05:10:23 +0000 (0:00:01.119) 0:30:08.393 ***** 2026-02-05 05:10:55.846256 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:10:55.846263 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:10:55.846269 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 05:10:55.846276 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:10:55.846282 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-05 05:10:55.846288 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:10:55.846294 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:10:55.846301 | orchestrator | 2026-02-05 05:10:55.846307 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:10:55.846313 | orchestrator | Thursday 05 February 2026 05:10:25 +0000 (0:00:02.032) 0:30:10.425 ***** 2026-02-05 05:10:55.846320 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:10:55.846326 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:10:55.846332 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 05:10:55.846355 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:10:55.846362 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:10:55.846368 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:10:55.846374 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:10:55.846380 | orchestrator | 2026-02-05 05:10:55.846386 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 05:10:55.846393 | orchestrator | Thursday 05 February 2026 05:10:27 +0000 (0:00:02.168) 0:30:12.594 ***** 2026-02-05 05:10:55.846399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-05 05:10:55.846406 | orchestrator | 2026-02-05 05:10:55.846412 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:10:55.846419 
| orchestrator | Thursday 05 February 2026 05:10:28 +0000 (0:00:01.166) 0:30:13.761 ***** 2026-02-05 05:10:55.846425 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-05 05:10:55.846431 | orchestrator | 2026-02-05 05:10:55.846437 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:10:55.846443 | orchestrator | Thursday 05 February 2026 05:10:30 +0000 (0:00:01.104) 0:30:14.865 ***** 2026-02-05 05:10:55.846450 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:55.846456 | orchestrator | 2026-02-05 05:10:55.846463 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:10:55.846469 | orchestrator | Thursday 05 February 2026 05:10:31 +0000 (0:00:01.501) 0:30:16.367 ***** 2026-02-05 05:10:55.846475 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846482 | orchestrator | 2026-02-05 05:10:55.846488 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:10:55.846494 | orchestrator | Thursday 05 February 2026 05:10:32 +0000 (0:00:01.146) 0:30:17.513 ***** 2026-02-05 05:10:55.846500 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846507 | orchestrator | 2026-02-05 05:10:55.846514 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:10:55.846524 | orchestrator | Thursday 05 February 2026 05:10:33 +0000 (0:00:01.188) 0:30:18.701 ***** 2026-02-05 05:10:55.846535 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846546 | orchestrator | 2026-02-05 05:10:55.846556 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:10:55.846566 | orchestrator | Thursday 05 February 2026 05:10:35 +0000 (0:00:01.129) 0:30:19.831 ***** 2026-02-05 05:10:55.846576 | orchestrator | ok: [testbed-node-2] 
2026-02-05 05:10:55.846587 | orchestrator | 2026-02-05 05:10:55.846597 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:10:55.846608 | orchestrator | Thursday 05 February 2026 05:10:36 +0000 (0:00:01.524) 0:30:21.356 ***** 2026-02-05 05:10:55.846620 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846630 | orchestrator | 2026-02-05 05:10:55.846641 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:10:55.846664 | orchestrator | Thursday 05 February 2026 05:10:37 +0000 (0:00:01.113) 0:30:22.469 ***** 2026-02-05 05:10:55.846671 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846679 | orchestrator | 2026-02-05 05:10:55.846686 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:10:55.846693 | orchestrator | Thursday 05 February 2026 05:10:38 +0000 (0:00:01.106) 0:30:23.576 ***** 2026-02-05 05:10:55.846700 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:55.846708 | orchestrator | 2026-02-05 05:10:55.846715 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:10:55.846722 | orchestrator | Thursday 05 February 2026 05:10:40 +0000 (0:00:01.537) 0:30:25.114 ***** 2026-02-05 05:10:55.846729 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:55.846736 | orchestrator | 2026-02-05 05:10:55.846750 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:10:55.846761 | orchestrator | Thursday 05 February 2026 05:10:41 +0000 (0:00:01.522) 0:30:26.636 ***** 2026-02-05 05:10:55.846769 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846776 | orchestrator | 2026-02-05 05:10:55.846783 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:10:55.846791 | orchestrator | Thursday 05 
February 2026 05:10:42 +0000 (0:00:00.747) 0:30:27.384 ***** 2026-02-05 05:10:55.846798 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:10:55.846805 | orchestrator | 2026-02-05 05:10:55.846812 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:10:55.846819 | orchestrator | Thursday 05 February 2026 05:10:43 +0000 (0:00:00.826) 0:30:28.210 ***** 2026-02-05 05:10:55.846826 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846835 | orchestrator | 2026-02-05 05:10:55.846845 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 05:10:55.846856 | orchestrator | Thursday 05 February 2026 05:10:44 +0000 (0:00:00.783) 0:30:28.994 ***** 2026-02-05 05:10:55.846867 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846877 | orchestrator | 2026-02-05 05:10:55.846887 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:10:55.846895 | orchestrator | Thursday 05 February 2026 05:10:44 +0000 (0:00:00.763) 0:30:29.757 ***** 2026-02-05 05:10:55.846904 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846913 | orchestrator | 2026-02-05 05:10:55.846922 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:10:55.846931 | orchestrator | Thursday 05 February 2026 05:10:45 +0000 (0:00:00.838) 0:30:30.596 ***** 2026-02-05 05:10:55.846940 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846949 | orchestrator | 2026-02-05 05:10:55.846959 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:10:55.846968 | orchestrator | Thursday 05 February 2026 05:10:46 +0000 (0:00:00.762) 0:30:31.359 ***** 2026-02-05 05:10:55.846978 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:10:55.846987 | orchestrator | 2026-02-05 05:10:55.846997 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 05:10:55.847006 | orchestrator | Thursday 05 February 2026 05:10:47 +0000 (0:00:00.758) 0:30:32.117 *****
2026-02-05 05:10:55.847016 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:55.847025 | orchestrator |
2026-02-05 05:10:55.847035 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 05:10:55.847045 | orchestrator | Thursday 05 February 2026 05:10:48 +0000 (0:00:00.786) 0:30:32.904 *****
2026-02-05 05:10:55.847055 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:55.847064 | orchestrator |
2026-02-05 05:10:55.847074 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 05:10:55.847084 | orchestrator | Thursday 05 February 2026 05:10:48 +0000 (0:00:00.771) 0:30:33.675 *****
2026-02-05 05:10:55.847095 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:10:55.847106 | orchestrator |
2026-02-05 05:10:55.847115 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 05:10:55.847161 | orchestrator | Thursday 05 February 2026 05:10:49 +0000 (0:00:00.802) 0:30:34.478 *****
2026-02-05 05:10:55.847168 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847174 | orchestrator |
2026-02-05 05:10:55.847180 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 05:10:55.847186 | orchestrator | Thursday 05 February 2026 05:10:50 +0000 (0:00:00.752) 0:30:35.230 *****
2026-02-05 05:10:55.847193 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847199 | orchestrator |
2026-02-05 05:10:55.847205 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 05:10:55.847212 | orchestrator | Thursday 05 February 2026 05:10:51 +0000 (0:00:00.787) 0:30:36.018 *****
2026-02-05 05:10:55.847218 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847224 | orchestrator |
2026-02-05 05:10:55.847237 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 05:10:55.847244 | orchestrator | Thursday 05 February 2026 05:10:51 +0000 (0:00:00.759) 0:30:36.777 *****
2026-02-05 05:10:55.847250 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847256 | orchestrator |
2026-02-05 05:10:55.847263 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 05:10:55.847269 | orchestrator | Thursday 05 February 2026 05:10:52 +0000 (0:00:00.763) 0:30:37.540 *****
2026-02-05 05:10:55.847275 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847281 | orchestrator |
2026-02-05 05:10:55.847288 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 05:10:55.847294 | orchestrator | Thursday 05 February 2026 05:10:53 +0000 (0:00:00.784) 0:30:38.325 *****
2026-02-05 05:10:55.847300 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847306 | orchestrator |
2026-02-05 05:10:55.847312 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 05:10:55.847319 | orchestrator | Thursday 05 February 2026 05:10:54 +0000 (0:00:00.771) 0:30:39.097 *****
2026-02-05 05:10:55.847325 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:10:55.847331 | orchestrator |
2026-02-05 05:10:55.847337 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-05 05:10:55.847344 | orchestrator | Thursday 05 February 2026 05:10:55 +0000 (0:00:00.762) 0:30:39.859 *****
2026-02-05 05:10:55.847358 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.285984 | orchestrator |
2026-02-05 05:11:43.286197 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-05 05:11:43.286217 | orchestrator | Thursday 05 February 2026 05:10:55 +0000 (0:00:00.796) 0:30:40.656 *****
2026-02-05 05:11:43.286228 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286239 | orchestrator |
2026-02-05 05:11:43.286249 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-05 05:11:43.286259 | orchestrator | Thursday 05 February 2026 05:10:56 +0000 (0:00:00.748) 0:30:41.405 *****
2026-02-05 05:11:43.286269 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286279 | orchestrator |
2026-02-05 05:11:43.286289 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-05 05:11:43.286299 | orchestrator | Thursday 05 February 2026 05:10:57 +0000 (0:00:00.762) 0:30:42.167 *****
2026-02-05 05:11:43.286324 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286335 | orchestrator |
2026-02-05 05:11:43.286345 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-05 05:11:43.286355 | orchestrator | Thursday 05 February 2026 05:10:58 +0000 (0:00:00.783) 0:30:42.951 *****
2026-02-05 05:11:43.286365 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286374 | orchestrator |
2026-02-05 05:11:43.286384 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 05:11:43.286393 | orchestrator | Thursday 05 February 2026 05:10:58 +0000 (0:00:00.766) 0:30:43.718 *****
2026-02-05 05:11:43.286403 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.286414 | orchestrator |
2026-02-05 05:11:43.286423 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 05:11:43.286433 | orchestrator | Thursday 05 February 2026 05:11:00 +0000 (0:00:01.648) 0:30:45.367 *****
2026-02-05 05:11:43.286442 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.286452 | orchestrator |
2026-02-05 05:11:43.286461 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 05:11:43.286471 | orchestrator | Thursday 05 February 2026 05:11:02 +0000 (0:00:02.059) 0:30:47.428 *****
2026-02-05 05:11:43.286480 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-05 05:11:43.286491 | orchestrator |
2026-02-05 05:11:43.286500 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 05:11:43.286510 | orchestrator | Thursday 05 February 2026 05:11:03 +0000 (0:00:01.301) 0:30:48.729 *****
2026-02-05 05:11:43.286540 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286553 | orchestrator |
2026-02-05 05:11:43.286564 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 05:11:43.286576 | orchestrator | Thursday 05 February 2026 05:11:05 +0000 (0:00:01.113) 0:30:49.843 *****
2026-02-05 05:11:43.286587 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286598 | orchestrator |
2026-02-05 05:11:43.286609 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 05:11:43.286620 | orchestrator | Thursday 05 February 2026 05:11:06 +0000 (0:00:01.132) 0:30:50.975 *****
2026-02-05 05:11:43.286631 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 05:11:43.286642 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 05:11:43.286653 | orchestrator |
2026-02-05 05:11:43.286665 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 05:11:43.286676 | orchestrator | Thursday 05 February 2026 05:11:08 +0000 (0:00:01.856) 0:30:52.832 *****
2026-02-05 05:11:43.286687 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.286698 | orchestrator |
2026-02-05 05:11:43.286710 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 05:11:43.286721 | orchestrator | Thursday 05 February 2026 05:11:09 +0000 (0:00:01.434) 0:30:54.267 *****
2026-02-05 05:11:43.286732 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286743 | orchestrator |
2026-02-05 05:11:43.286754 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 05:11:43.286765 | orchestrator | Thursday 05 February 2026 05:11:10 +0000 (0:00:01.118) 0:30:55.385 *****
2026-02-05 05:11:43.286776 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286787 | orchestrator |
2026-02-05 05:11:43.286798 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 05:11:43.286809 | orchestrator | Thursday 05 February 2026 05:11:11 +0000 (0:00:00.764) 0:30:56.149 *****
2026-02-05 05:11:43.286821 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286832 | orchestrator |
2026-02-05 05:11:43.286843 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 05:11:43.286854 | orchestrator | Thursday 05 February 2026 05:11:12 +0000 (0:00:00.760) 0:30:56.910 *****
2026-02-05 05:11:43.286866 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-05 05:11:43.286877 | orchestrator |
2026-02-05 05:11:43.286887 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 05:11:43.286899 | orchestrator | Thursday 05 February 2026 05:11:13 +0000 (0:00:01.107) 0:30:58.017 *****
2026-02-05 05:11:43.286910 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.286921 | orchestrator |
2026-02-05 05:11:43.286933 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 05:11:43.286944 | orchestrator | Thursday 05 February 2026 05:11:14 +0000 (0:00:01.727) 0:30:59.745 *****
2026-02-05 05:11:43.286954 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 05:11:43.286964 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 05:11:43.286973 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 05:11:43.286982 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.286992 | orchestrator |
2026-02-05 05:11:43.287001 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 05:11:43.287011 | orchestrator | Thursday 05 February 2026 05:11:16 +0000 (0:00:01.115) 0:31:00.861 *****
2026-02-05 05:11:43.287036 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287046 | orchestrator |
2026-02-05 05:11:43.287056 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 05:11:43.287065 | orchestrator | Thursday 05 February 2026 05:11:17 +0000 (0:00:01.117) 0:31:01.978 *****
2026-02-05 05:11:43.287075 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287091 | orchestrator |
2026-02-05 05:11:43.287101 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 05:11:43.287110 | orchestrator | Thursday 05 February 2026 05:11:18 +0000 (0:00:01.172) 0:31:03.151 *****
2026-02-05 05:11:43.287119 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287154 | orchestrator |
2026-02-05 05:11:43.287164 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 05:11:43.287173 | orchestrator | Thursday 05 February 2026 05:11:19 +0000 (0:00:01.139) 0:31:04.291 *****
2026-02-05 05:11:43.287188 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287198 | orchestrator |
2026-02-05 05:11:43.287208 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-05 05:11:43.287218 | orchestrator | Thursday 05 February 2026 05:11:20 +0000 (0:00:01.236) 0:31:05.528 *****
2026-02-05 05:11:43.287227 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287236 | orchestrator |
2026-02-05 05:11:43.287246 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 05:11:43.287255 | orchestrator | Thursday 05 February 2026 05:11:21 +0000 (0:00:00.772) 0:31:06.300 *****
2026-02-05 05:11:43.287265 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.287274 | orchestrator |
2026-02-05 05:11:43.287284 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 05:11:43.287294 | orchestrator | Thursday 05 February 2026 05:11:23 +0000 (0:00:02.195) 0:31:08.496 *****
2026-02-05 05:11:43.287303 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.287313 | orchestrator |
2026-02-05 05:11:43.287322 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 05:11:43.287332 | orchestrator | Thursday 05 February 2026 05:11:24 +0000 (0:00:00.762) 0:31:09.258 *****
2026-02-05 05:11:43.287341 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-05 05:11:43.287351 | orchestrator |
2026-02-05 05:11:43.287361 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-05 05:11:43.287378 | orchestrator | Thursday 05 February 2026 05:11:25 +0000 (0:00:01.090) 0:31:10.349 *****
2026-02-05 05:11:43.287418 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287436 | orchestrator |
2026-02-05 05:11:43.287453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-05 05:11:43.287469 | orchestrator | Thursday 05 February 2026 05:11:26 +0000 (0:00:01.117) 0:31:11.466 *****
2026-02-05 05:11:43.287482 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287491 | orchestrator |
2026-02-05 05:11:43.287501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-05 05:11:43.287511 | orchestrator | Thursday 05 February 2026 05:11:27 +0000 (0:00:01.138) 0:31:12.605 *****
2026-02-05 05:11:43.287520 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287530 | orchestrator |
2026-02-05 05:11:43.287539 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-05 05:11:43.287549 | orchestrator | Thursday 05 February 2026 05:11:28 +0000 (0:00:01.137) 0:31:13.742 *****
2026-02-05 05:11:43.287558 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287568 | orchestrator |
2026-02-05 05:11:43.287577 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-05 05:11:43.287587 | orchestrator | Thursday 05 February 2026 05:11:30 +0000 (0:00:01.104) 0:31:14.847 *****
2026-02-05 05:11:43.287597 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287606 | orchestrator |
2026-02-05 05:11:43.287616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-05 05:11:43.287625 | orchestrator | Thursday 05 February 2026 05:11:31 +0000 (0:00:01.160) 0:31:16.007 *****
2026-02-05 05:11:43.287635 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287644 | orchestrator |
2026-02-05 05:11:43.287654 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-05 05:11:43.287663 | orchestrator | Thursday 05 February 2026 05:11:32 +0000 (0:00:01.133) 0:31:17.142 *****
2026-02-05 05:11:43.287680 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287690 | orchestrator |
2026-02-05 05:11:43.287700 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-05 05:11:43.287709 | orchestrator | Thursday 05 February 2026 05:11:33 +0000 (0:00:01.176) 0:31:18.318 *****
2026-02-05 05:11:43.287719 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:11:43.287728 | orchestrator |
2026-02-05 05:11:43.287738 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-05 05:11:43.287747 | orchestrator | Thursday 05 February 2026 05:11:34 +0000 (0:00:01.121) 0:31:19.439 *****
2026-02-05 05:11:43.287757 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:11:43.287766 | orchestrator |
2026-02-05 05:11:43.287776 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 05:11:43.287785 | orchestrator | Thursday 05 February 2026 05:11:35 +0000 (0:00:00.781) 0:31:20.221 *****
2026-02-05 05:11:43.287795 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-05 05:11:43.287804 | orchestrator |
2026-02-05 05:11:43.287814 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-05 05:11:43.287824 | orchestrator | Thursday 05 February 2026 05:11:36 +0000 (0:00:01.098) 0:31:21.320 *****
2026-02-05 05:11:43.287833 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-05 05:11:43.287844 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-05 05:11:43.287854 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-05 05:11:43.287863 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-05 05:11:43.287873 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-05 05:11:43.287883 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-05 05:11:43.287900 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-05 05:12:18.890631 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-05 05:12:18.890732 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 05:12:18.890747 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 05:12:18.890757 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 05:12:18.890766 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 05:12:18.890775 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 05:12:18.890784 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 05:12:18.890793 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-05 05:12:18.890819 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-05 05:12:18.890827 | orchestrator |
2026-02-05 05:12:18.890837 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 05:12:18.890845 | orchestrator | Thursday 05 February 2026 05:11:43 +0000 (0:00:06.764) 0:31:28.085 *****
2026-02-05 05:12:18.890853 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.890861 | orchestrator |
2026-02-05 05:12:18.890868 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 05:12:18.890876 | orchestrator | Thursday 05 February 2026 05:11:44 +0000 (0:00:00.770) 0:31:28.856 *****
2026-02-05 05:12:18.890883 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.890892 | orchestrator |
2026-02-05 05:12:18.890899 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 05:12:18.890906 | orchestrator | Thursday 05 February 2026 05:11:44 +0000 (0:00:00.763) 0:31:29.619 *****
2026-02-05 05:12:18.890914 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.890922 | orchestrator |
2026-02-05 05:12:18.890929 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 05:12:18.890937 | orchestrator | Thursday 05 February 2026 05:11:45 +0000 (0:00:00.783) 0:31:30.403 *****
2026-02-05 05:12:18.890944 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.890973 | orchestrator |
2026-02-05 05:12:18.890981 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 05:12:18.890989 | orchestrator | Thursday 05 February 2026 05:11:46 +0000 (0:00:00.779) 0:31:31.183 *****
2026-02-05 05:12:18.890996 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891003 | orchestrator |
2026-02-05 05:12:18.891011 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 05:12:18.891018 | orchestrator | Thursday 05 February 2026 05:11:47 +0000 (0:00:00.760) 0:31:31.943 *****
2026-02-05 05:12:18.891026 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891034 | orchestrator |
2026-02-05 05:12:18.891042 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 05:12:18.891051 | orchestrator | Thursday 05 February 2026 05:11:47 +0000 (0:00:00.755) 0:31:32.699 *****
2026-02-05 05:12:18.891059 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891066 | orchestrator |
2026-02-05 05:12:18.891075 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 05:12:18.891083 | orchestrator | Thursday 05 February 2026 05:11:48 +0000 (0:00:00.744) 0:31:33.444 *****
2026-02-05 05:12:18.891090 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891099 | orchestrator |
2026-02-05 05:12:18.891107 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 05:12:18.891114 | orchestrator | Thursday 05 February 2026 05:11:49 +0000 (0:00:00.798) 0:31:34.242 *****
2026-02-05 05:12:18.891122 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891178 | orchestrator |
2026-02-05 05:12:18.891188 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 05:12:18.891196 | orchestrator | Thursday 05 February 2026 05:11:50 +0000 (0:00:00.791) 0:31:35.034 *****
2026-02-05 05:12:18.891203 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891212 | orchestrator |
2026-02-05 05:12:18.891220 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 05:12:18.891228 | orchestrator | Thursday 05 February 2026 05:11:50 +0000 (0:00:00.772) 0:31:35.806 *****
2026-02-05 05:12:18.891235 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891242 | orchestrator |
2026-02-05 05:12:18.891251 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 05:12:18.891259 | orchestrator | Thursday 05 February 2026 05:11:51 +0000 (0:00:00.764) 0:31:36.570 *****
2026-02-05 05:12:18.891267 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891275 | orchestrator |
2026-02-05 05:12:18.891284 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 05:12:18.891292 | orchestrator | Thursday 05 February 2026 05:11:52 +0000 (0:00:00.748) 0:31:37.319 *****
2026-02-05 05:12:18.891301 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891309 | orchestrator |
2026-02-05 05:12:18.891319 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 05:12:18.891327 | orchestrator | Thursday 05 February 2026 05:11:53 +0000 (0:00:00.867) 0:31:38.187 *****
2026-02-05 05:12:18.891335 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891343 | orchestrator |
2026-02-05 05:12:18.891351 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 05:12:18.891359 | orchestrator | Thursday 05 February 2026 05:11:54 +0000 (0:00:00.755) 0:31:38.942 *****
2026-02-05 05:12:18.891369 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891378 | orchestrator |
2026-02-05 05:12:18.891387 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 05:12:18.891397 | orchestrator | Thursday 05 February 2026 05:11:54 +0000 (0:00:00.862) 0:31:39.805 *****
2026-02-05 05:12:18.891406 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891415 | orchestrator |
2026-02-05 05:12:18.891424 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 05:12:18.891432 | orchestrator | Thursday 05 February 2026 05:11:55 +0000 (0:00:00.774) 0:31:40.579 *****
2026-02-05 05:12:18.891472 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891483 | orchestrator |
2026-02-05 05:12:18.891492 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:12:18.891501 | orchestrator | Thursday 05 February 2026 05:11:56 +0000 (0:00:00.765) 0:31:41.345 *****
2026-02-05 05:12:18.891509 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891517 | orchestrator |
2026-02-05 05:12:18.891525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:12:18.891533 | orchestrator | Thursday 05 February 2026 05:11:57 +0000 (0:00:00.768) 0:31:42.115 *****
2026-02-05 05:12:18.891542 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891551 | orchestrator |
2026-02-05 05:12:18.891560 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:12:18.891568 | orchestrator | Thursday 05 February 2026 05:11:58 +0000 (0:00:00.769) 0:31:42.885 *****
2026-02-05 05:12:18.891577 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891586 | orchestrator |
2026-02-05 05:12:18.891594 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:12:18.891603 | orchestrator | Thursday 05 February 2026 05:11:58 +0000 (0:00:00.773) 0:31:43.659 *****
2026-02-05 05:12:18.891611 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891619 | orchestrator |
2026-02-05 05:12:18.891627 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:12:18.891635 | orchestrator | Thursday 05 February 2026 05:11:59 +0000 (0:00:00.763) 0:31:44.422 *****
2026-02-05 05:12:18.891644 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-05 05:12:18.891653 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-05 05:12:18.891661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-05 05:12:18.891669 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891678 | orchestrator |
2026-02-05 05:12:18.891686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:12:18.891694 | orchestrator | Thursday 05 February 2026 05:12:00 +0000 (0:00:01.038) 0:31:45.461 *****
2026-02-05 05:12:18.891703 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-05 05:12:18.891711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-05 05:12:18.891719 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-05 05:12:18.891726 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891734 | orchestrator |
2026-02-05 05:12:18.891743 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:12:18.891751 | orchestrator | Thursday 05 February 2026 05:12:01 +0000 (0:00:01.037) 0:31:46.499 *****
2026-02-05 05:12:18.891758 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-05 05:12:18.891766 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-05 05:12:18.891774 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-05 05:12:18.891782 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891789 | orchestrator |
2026-02-05 05:12:18.891797 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:12:18.891805 | orchestrator | Thursday 05 February 2026 05:12:02 +0000 (0:00:01.072) 0:31:47.572 *****
2026-02-05 05:12:18.891813 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891822 | orchestrator |
2026-02-05 05:12:18.891830 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:12:18.891839 | orchestrator | Thursday 05 February 2026 05:12:03 +0000 (0:00:00.793) 0:31:48.365 *****
2026-02-05 05:12:18.891847 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-05 05:12:18.891855 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.891863 | orchestrator |
2026-02-05 05:12:18.891871 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 05:12:18.891880 | orchestrator | Thursday 05 February 2026 05:12:04 +0000 (0:00:00.904) 0:31:49.270 *****
2026-02-05 05:12:18.891896 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:12:18.891904 | orchestrator |
2026-02-05 05:12:18.891912 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-05 05:12:18.891920 | orchestrator | Thursday 05 February 2026 05:12:05 +0000 (0:00:01.432) 0:31:50.702 *****
2026-02-05 05:12:18.891929 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:12:18.891937 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:12:18.891945 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 05:12:18.891954 | orchestrator |
2026-02-05 05:12:18.891963 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-05 05:12:18.891971 | orchestrator | Thursday 05 February 2026 05:12:07 +0000 (0:00:01.583) 0:31:52.286 *****
2026-02-05 05:12:18.891980 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-02-05 05:12:18.891988 | orchestrator |
2026-02-05 05:12:18.891997 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-05 05:12:18.892006 | orchestrator | Thursday 05 February 2026 05:12:08 +0000 (0:00:01.087) 0:31:53.373 *****
2026-02-05 05:12:18.892015 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:12:18.892024 | orchestrator |
2026-02-05 05:12:18.892032 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-05 05:12:18.892040 | orchestrator | Thursday 05 February 2026 05:12:10 +0000 (0:00:01.478) 0:31:54.852 *****
2026-02-05 05:12:18.892047 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:12:18.892056 | orchestrator |
2026-02-05 05:12:18.892064 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-05 05:12:18.892072 | orchestrator | Thursday 05 February 2026 05:12:11 +0000 (0:00:01.137) 0:31:55.990 *****
2026-02-05 05:12:18.892081 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:12:18.892088 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:12:18.892108 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:13:05.585207 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-02-05 05:13:05.585344 | orchestrator |
2026-02-05 05:13:05.585502 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-05 05:13:05.585527 | orchestrator | Thursday 05 February 2026 05:12:18 +0000 (0:00:07.706) 0:32:03.696 *****
2026-02-05 05:13:05.585543 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.585559 | orchestrator |
2026-02-05 05:13:05.585574 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-05 05:13:05.585589 | orchestrator | Thursday 05 February 2026 05:12:20 +0000 (0:00:01.143) 0:32:04.840 *****
2026-02-05 05:13:05.585605 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-05 05:13:05.585621 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-05 05:13:05.585635 | orchestrator |
2026-02-05 05:13:05.585657 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-05 05:13:05.585673 | orchestrator | Thursday 05 February 2026 05:12:23 +0000 (0:00:03.346) 0:32:08.187 *****
2026-02-05 05:13:05.585689 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-05 05:13:05.585706 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-05 05:13:05.585722 | orchestrator |
2026-02-05 05:13:05.585736 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-05 05:13:05.585750 | orchestrator | Thursday 05 February 2026 05:12:25 +0000 (0:00:02.065) 0:32:10.253 *****
2026-02-05 05:13:05.585764 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.585777 | orchestrator |
2026-02-05 05:13:05.585791 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-05 05:13:05.585806 | orchestrator | Thursday 05 February 2026 05:12:26 +0000 (0:00:01.480) 0:32:11.733 *****
2026-02-05 05:13:05.585819 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:13:05.585855 | orchestrator |
2026-02-05 05:13:05.585870 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-05 05:13:05.585883 | orchestrator | Thursday 05 February 2026 05:12:27 +0000 (0:00:00.779) 0:32:12.513 *****
2026-02-05 05:13:05.585897 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:13:05.585911 | orchestrator |
2026-02-05 05:13:05.585925 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-05 05:13:05.585938 | orchestrator | Thursday 05 February 2026 05:12:28 +0000 (0:00:00.791) 0:32:13.304 *****
2026-02-05 05:13:05.585952 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-02-05 05:13:05.585966 | orchestrator |
2026-02-05 05:13:05.585979 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-05 05:13:05.585993 | orchestrator | Thursday 05 February 2026 05:12:29 +0000 (0:00:01.104) 0:32:14.408 *****
2026-02-05 05:13:05.586007 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:13:05.586110 | orchestrator |
2026-02-05 05:13:05.586125 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-05 05:13:05.586155 | orchestrator | Thursday 05 February 2026 05:12:30 +0000 (0:00:01.139) 0:32:15.549 *****
2026-02-05 05:13:05.586169 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:13:05.586183 | orchestrator |
2026-02-05 05:13:05.586196 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-05 05:13:05.586210 | orchestrator | Thursday 05 February 2026 05:12:31 +0000 (0:00:01.115) 0:32:16.664 *****
2026-02-05 05:13:05.586224 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-02-05 05:13:05.586237 | orchestrator |
2026-02-05 05:13:05.586251 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-05 05:13:05.586264 | orchestrator | Thursday 05 February 2026 05:12:33 +0000 (0:00:01.190) 0:32:17.855 *****
2026-02-05 05:13:05.586278 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.586291 | orchestrator |
2026-02-05 05:13:05.586304 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-05 05:13:05.586318 | orchestrator | Thursday 05 February 2026 05:12:35 +0000 (0:00:02.087) 0:32:19.942 *****
2026-02-05 05:13:05.586332 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.586345 | orchestrator |
2026-02-05 05:13:05.586358 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-05 05:13:05.586372 | orchestrator | Thursday 05 February 2026 05:12:37 +0000 (0:00:01.956) 0:32:21.899 *****
2026-02-05 05:13:05.586384 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.586398 | orchestrator |
2026-02-05 05:13:05.586412 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-05 05:13:05.586426 | orchestrator | Thursday 05 February 2026 05:12:39 +0000 (0:00:02.491) 0:32:24.391 *****
2026-02-05 05:13:05.586439 | orchestrator | changed: [testbed-node-2]
2026-02-05 05:13:05.586452 | orchestrator |
2026-02-05 05:13:05.586465 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-05 05:13:05.586479 | orchestrator | Thursday 05 February 2026 05:12:43 +0000 (0:00:03.477) 0:32:27.869 *****
2026-02-05 05:13:05.586492 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-05 05:13:05.586506 | orchestrator |
2026-02-05 05:13:05.586519 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-05 05:13:05.586532 | orchestrator | Thursday 05 February 2026 05:12:44 +0000 (0:00:01.491) 0:32:29.361 *****
2026-02-05 05:13:05.586545 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 05:13:05.586558 | orchestrator |
2026-02-05 05:13:05.586572 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-05 05:13:05.586585 | orchestrator | Thursday 05 February 2026 05:12:47 +0000 (0:00:02.493) 0:32:31.855 *****
2026-02-05 05:13:05.586598 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-05 05:13:05.586611 | orchestrator |
2026-02-05 05:13:05.586625 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-05 05:13:05.586650 | orchestrator | Thursday 05 February 2026 05:12:49 +0000 (0:00:02.404) 0:32:34.260 *****
2026-02-05 05:13:05.586663 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.586676 | orchestrator |
2026-02-05 05:13:05.586688 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-05 05:13:05.586725 | orchestrator | Thursday 05 February 2026 05:12:50 +0000 (0:00:01.342) 0:32:35.603 *****
2026-02-05 05:13:05.586739 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:13:05.586753 | orchestrator |
2026-02-05 05:13:05.586766 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-05 05:13:05.586779 | orchestrator | Thursday 05 February 2026 05:12:51 +0000 (0:00:01.122) 0:32:36.725 *****
2026-02-05 05:13:05.586792 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-02-05 05:13:05.586806 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-02-05 05:13:05.586820 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:13:05.586833 | orchestrator |
2026-02-05 05:13:05.586846 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-05 05:13:05.586865 | orchestrator | Thursday 05 February 2026 05:12:53 +0000 (0:00:01.621) 0:32:38.347 *****
2026-02-05 05:13:05.586878 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-05 05:13:05.586891 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-02-05 05:13:05.586904 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-02-05 05:13:05.586918 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-05 05:13:05.586932 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:13:05.586945 | orchestrator |
2026-02-05 05:13:05.586958 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-02-05 05:13:05.586971 | orchestrator |
2026-02-05 05:13:05.586984 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:13:05.586998 | orchestrator | Thursday 05 February 2026 05:12:55 +0000 (0:00:02.352) 0:32:40.699 *****
2026-02-05 05:13:05.587011 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:13:05.587025 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:13:05.587039 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:13:05.587052 | orchestrator |
2026-02-05 05:13:05.587065 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:13:05.587078 | orchestrator | Thursday 05 February 2026 05:12:57 +0000 (0:00:01.604) 0:32:42.304 *****
2026-02-05 05:13:05.587091 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:13:05.587104 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:13:05.587118 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:13:05.587151 | orchestrator |
2026-02-05 05:13:05.587165 | orchestrator | TASK [Get pool list] ***********************************************************
2026-02-05 05:13:05.587178 | orchestrator | Thursday 05 February 2026
05:12:59 +0000 (0:00:01.569) 0:32:43.873 ***** 2026-02-05 05:13:05.587191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:13:05.587204 | orchestrator | 2026-02-05 05:13:05.587218 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-05 05:13:05.587232 | orchestrator | Thursday 05 February 2026 05:13:01 +0000 (0:00:02.911) 0:32:46.785 ***** 2026-02-05 05:13:05.587245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:13:05.587258 | orchestrator | 2026-02-05 05:13:05.587271 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-05 05:13:05.587284 | orchestrator | Thursday 05 February 2026 05:13:05 +0000 (0:00:03.061) 0:32:49.847 ***** 2026-02-05 05:13:05.587303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-05T02:37:17.912968+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:05.587348 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-05T02:38:25.907323+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:05.991878 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-05T02:38:29.337226+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '80', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:05.992013 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-05T02:39:26.801688+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '72', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '66', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:05.992027 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-05T02:39:32.392821+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '72', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '66', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:05.992046 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-05T02:39:38.592598+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '72', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '68', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:05.992061 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-05T02:39:44.793903+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '191', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '68', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:07.736717 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-05T02:39:51.017251+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': 
'0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '72', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:07.736789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-05T02:40:03.322454+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '72', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:07.736839 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-05T02:40:44.115380+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 
32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '87', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 87, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:07.736852 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-05T02:40:52.849924+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '94', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 94, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:13:07.736878 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-05T02:41:03.069805+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '201', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 201, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:14:45.305295 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-05T02:41:12.632186+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '113', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 113, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:14:45.305491 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-05T02:41:21.469885+0000', 
'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '119', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 119, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-05 05:14:45.305515 | orchestrator | 2026-02-05 05:14:45.305528 | orchestrator | TASK [Disable balancer] ******************************************************** 
2026-02-05 05:14:45.305572 | orchestrator | Thursday 05 February 2026 05:13:07 +0000 (0:00:02.702) 0:32:52.549 ***** 2026-02-05 05:14:45.305607 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:14:45.305625 | orchestrator | 2026-02-05 05:14:45.305645 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-02-05 05:14:45.305664 | orchestrator | Thursday 05 February 2026 05:13:10 +0000 (0:00:02.852) 0:32:55.402 ***** 2026-02-05 05:14:45.305684 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-05 05:14:45.305704 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-05 05:14:45.305723 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-05 05:14:45.305744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-05 05:14:45.305764 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-05 05:14:45.305784 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-05 05:14:45.305803 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-05 05:14:45.305822 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-05 05:14:45.305842 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-05 05:14:45.305862 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 
'mode': 'off'})  2026-02-05 05:14:45.305881 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-05 05:14:45.305898 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-05 05:14:45.305913 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-05 05:14:45.305932 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-05 05:14:45.305950 | orchestrator | 2026-02-05 05:14:45.305969 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-02-05 05:14:45.305988 | orchestrator | Thursday 05 February 2026 05:14:27 +0000 (0:01:16.798) 0:34:12.200 ***** 2026-02-05 05:14:45.306007 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-05 05:14:45.306106 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-05 05:14:45.306118 | orchestrator | 2026-02-05 05:14:45.306130 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-05 05:14:45.306172 | orchestrator | 2026-02-05 05:14:45.306192 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 05:14:45.306210 | orchestrator | Thursday 05 February 2026 05:14:34 +0000 (0:00:07.402) 0:34:19.602 ***** 2026-02-05 05:14:45.306229 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-05 05:14:45.306247 | orchestrator | 2026-02-05 05:14:45.306267 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 05:14:45.306286 | orchestrator | Thursday 05 February 2026 05:14:35 +0000 (0:00:01.117) 0:34:20.720 ***** 2026-02-05 05:14:45.306306 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306325 | orchestrator | 2026-02-05 
05:14:45.306355 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:14:45.306375 | orchestrator | Thursday 05 February 2026 05:14:37 +0000 (0:00:01.415) 0:34:22.136 ***** 2026-02-05 05:14:45.306393 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306410 | orchestrator | 2026-02-05 05:14:45.306427 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:14:45.306444 | orchestrator | Thursday 05 February 2026 05:14:38 +0000 (0:00:01.103) 0:34:23.239 ***** 2026-02-05 05:14:45.306473 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306490 | orchestrator | 2026-02-05 05:14:45.306506 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:14:45.306523 | orchestrator | Thursday 05 February 2026 05:14:39 +0000 (0:00:01.404) 0:34:24.643 ***** 2026-02-05 05:14:45.306541 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306559 | orchestrator | 2026-02-05 05:14:45.306578 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:14:45.306595 | orchestrator | Thursday 05 February 2026 05:14:40 +0000 (0:00:01.106) 0:34:25.750 ***** 2026-02-05 05:14:45.306611 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306628 | orchestrator | 2026-02-05 05:14:45.306645 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:14:45.306663 | orchestrator | Thursday 05 February 2026 05:14:42 +0000 (0:00:01.087) 0:34:26.837 ***** 2026-02-05 05:14:45.306679 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306697 | orchestrator | 2026-02-05 05:14:45.306713 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:14:45.306730 | orchestrator | Thursday 05 February 2026 05:14:43 +0000 (0:00:01.108) 0:34:27.946 ***** 
2026-02-05 05:14:45.306746 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:14:45.306765 | orchestrator | 2026-02-05 05:14:45.306782 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:14:45.306801 | orchestrator | Thursday 05 February 2026 05:14:44 +0000 (0:00:01.074) 0:34:29.020 ***** 2026-02-05 05:14:45.306820 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:14:45.306838 | orchestrator | 2026-02-05 05:14:45.306873 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:15:07.984441 | orchestrator | Thursday 05 February 2026 05:14:45 +0000 (0:00:01.096) 0:34:30.117 ***** 2026-02-05 05:15:07.984530 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:15:07.984541 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:15:07.984549 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:15:07.984556 | orchestrator | 2026-02-05 05:15:07.984565 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:15:07.984572 | orchestrator | Thursday 05 February 2026 05:14:46 +0000 (0:00:01.615) 0:34:31.732 ***** 2026-02-05 05:15:07.984579 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:07.984588 | orchestrator | 2026-02-05 05:15:07.984596 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:15:07.984603 | orchestrator | Thursday 05 February 2026 05:14:48 +0000 (0:00:01.245) 0:34:32.977 ***** 2026-02-05 05:15:07.984610 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:15:07.984617 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:15:07.984624 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:15:07.984633 | orchestrator | 2026-02-05 05:15:07.984640 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:15:07.984647 | orchestrator | Thursday 05 February 2026 05:14:51 +0000 (0:00:03.160) 0:34:36.137 ***** 2026-02-05 05:15:07.984655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 05:15:07.984663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 05:15:07.984670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 05:15:07.984677 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.984684 | orchestrator | 2026-02-05 05:15:07.984692 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:15:07.984699 | orchestrator | Thursday 05 February 2026 05:14:52 +0000 (0:00:01.362) 0:34:37.500 ***** 2026-02-05 05:15:07.984707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:15:07.984737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:15:07.984746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:15:07.984753 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.984760 | orchestrator | 2026-02-05 
05:15:07.984767 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:15:07.984775 | orchestrator | Thursday 05 February 2026 05:14:54 +0000 (0:00:01.750) 0:34:39.250 ***** 2026-02-05 05:15:07.984796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:15:07.984807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:15:07.984814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:15:07.984822 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.984829 | orchestrator | 2026-02-05 05:15:07.984837 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:15:07.984844 | orchestrator | Thursday 05 February 2026 05:14:55 +0000 (0:00:01.107) 0:34:40.357 ***** 2026-02-05 05:15:07.984867 | orchestrator | 
ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:14:48.695987', 'end': '2026-02-05 05:14:48.743822', 'delta': '0:00:00.047835', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:15:07.984877 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:14:49.244166', 'end': '2026-02-05 05:14:49.291948', 'delta': '0:00:00.047782', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:15:07.984891 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:14:50.109949', 'end': '2026-02-05 05:14:50.157967', 'delta': '0:00:00.048018', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:15:07.984899 | orchestrator | 2026-02-05 05:15:07.984906 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:15:07.984913 | orchestrator | Thursday 05 February 2026 05:14:56 +0000 (0:00:01.115) 0:34:41.473 ***** 2026-02-05 05:15:07.984921 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:07.984928 | orchestrator | 2026-02-05 05:15:07.984935 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:15:07.984942 | orchestrator | Thursday 05 February 2026 05:14:57 +0000 (0:00:01.165) 0:34:42.639 ***** 2026-02-05 05:15:07.984950 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.984957 | orchestrator | 2026-02-05 05:15:07.984964 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:15:07.984972 | orchestrator | Thursday 05 February 2026 05:14:59 +0000 (0:00:01.375) 0:34:44.015 ***** 2026-02-05 05:15:07.984983 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:07.984990 | orchestrator | 2026-02-05 05:15:07.984998 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:15:07.985005 | orchestrator | Thursday 05 February 2026 05:15:00 +0000 (0:00:01.098) 0:34:45.113 ***** 2026-02-05 05:15:07.985012 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:15:07.985020 | orchestrator | 2026-02-05 05:15:07.985027 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:15:07.985034 | orchestrator | 
Thursday 05 February 2026 05:15:02 +0000 (0:00:01.901) 0:34:47.015 ***** 2026-02-05 05:15:07.985042 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:07.985049 | orchestrator | 2026-02-05 05:15:07.985056 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:15:07.985063 | orchestrator | Thursday 05 February 2026 05:15:03 +0000 (0:00:01.145) 0:34:48.160 ***** 2026-02-05 05:15:07.985071 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.985078 | orchestrator | 2026-02-05 05:15:07.985085 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:15:07.985093 | orchestrator | Thursday 05 February 2026 05:15:04 +0000 (0:00:00.930) 0:34:49.090 ***** 2026-02-05 05:15:07.985100 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.985107 | orchestrator | 2026-02-05 05:15:07.985115 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:15:07.985122 | orchestrator | Thursday 05 February 2026 05:15:05 +0000 (0:00:00.967) 0:34:50.058 ***** 2026-02-05 05:15:07.985129 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.985173 | orchestrator | 2026-02-05 05:15:07.985186 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:15:07.985197 | orchestrator | Thursday 05 February 2026 05:15:06 +0000 (0:00:00.899) 0:34:50.958 ***** 2026-02-05 05:15:07.985209 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:07.985222 | orchestrator | 2026-02-05 05:15:07.985233 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:15:07.985246 | orchestrator | Thursday 05 February 2026 05:15:07 +0000 (0:00:00.897) 0:34:51.855 ***** 2026-02-05 05:15:07.985266 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:11.977090 | orchestrator | 2026-02-05 05:15:11.977253 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:15:11.977272 | orchestrator | Thursday 05 February 2026 05:15:07 +0000 (0:00:00.935) 0:34:52.790 ***** 2026-02-05 05:15:11.977284 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:11.977296 | orchestrator | 2026-02-05 05:15:11.977307 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:15:11.977319 | orchestrator | Thursday 05 February 2026 05:15:08 +0000 (0:00:00.883) 0:34:53.673 ***** 2026-02-05 05:15:11.977330 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:11.977343 | orchestrator | 2026-02-05 05:15:11.977354 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:15:11.977364 | orchestrator | Thursday 05 February 2026 05:15:09 +0000 (0:00:00.911) 0:34:54.585 ***** 2026-02-05 05:15:11.977375 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:15:11.977386 | orchestrator | 2026-02-05 05:15:11.977397 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:15:11.977409 | orchestrator | Thursday 05 February 2026 05:15:10 +0000 (0:00:00.928) 0:34:55.514 ***** 2026-02-05 05:15:11.977420 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:15:11.977431 | orchestrator | 2026-02-05 05:15:11.977443 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:15:11.977454 | orchestrator | Thursday 05 February 2026 05:15:11 +0000 (0:00:01.091) 0:34:56.606 ***** 2026-02-05 05:15:11.977467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:15:11.977482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}})  2026-02-05 05:15:11.977516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 05:15:11.977529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}})  2026-02-05 05:15:11.977566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:15:11.977597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:15:11.977610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:15:11.977625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:15:11.977637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:15:11.977651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:15:11.977669 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}})  2026-02-05 05:15:11.977684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}})  2026-02-05 05:15:11.977714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:15:13.239606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 05:15:13.239727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:15:13.239747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:15:13.239780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M', 'dm-uuid-CRYPT-LUKS2-7cbe1ae0472e401592481616ea071c47-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-05 05:15:13.239794 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:13.239806 | orchestrator |
2026-02-05 05:15:13.239819 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-05 05:15:13.239830 | orchestrator | Thursday 05 February 2026 05:15:13 +0000 (0:00:01.277) 0:34:57.883 *****
2026-02-05 05:15:13.239862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:13.239876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:13.239890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:13.239909 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:13.239931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:13.239951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437047 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:14.437283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:52.209730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M', 'dm-uuid-CRYPT-LUKS2-7cbe1ae0472e401592481616ea071c47-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:15:52.209869 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.209894 | orchestrator |
2026-02-05 05:15:52.209911 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-05 05:15:52.209929 | orchestrator | Thursday 05 February 2026 05:15:14 +0000 (0:00:01.363) 0:34:59.247 *****
2026-02-05 05:15:52.209945 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.209963 | orchestrator |
2026-02-05 05:15:52.209979 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 05:15:52.209996 | orchestrator | Thursday 05 February 2026 05:15:15 +0000 (0:00:01.511) 0:35:00.758 *****
2026-02-05 05:15:52.210085 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.210109 | orchestrator |
2026-02-05 05:15:52.210127 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:15:52.210204 | orchestrator | Thursday 05 February 2026 05:15:17 +0000 (0:00:01.129) 0:35:01.888 *****
2026-02-05 05:15:52.210223 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.210241 | orchestrator |
2026-02-05 05:15:52.210258 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:15:52.210276 | orchestrator | Thursday 05 February 2026 05:15:18 +0000 (0:00:01.473) 0:35:03.361 *****
2026-02-05 05:15:52.210294 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210312 | orchestrator |
2026-02-05 05:15:52.210328 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:15:52.210368 | orchestrator | Thursday 05 February 2026 05:15:19 +0000 (0:00:01.111) 0:35:04.472 *****
2026-02-05 05:15:52.210420 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210440 | orchestrator |
2026-02-05 05:15:52.210458 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:15:52.210475 | orchestrator | Thursday 05 February 2026 05:15:20 +0000 (0:00:01.220) 0:35:05.693 *****
2026-02-05 05:15:52.210493 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210511 | orchestrator |
2026-02-05 05:15:52.210527 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 05:15:52.210544 | orchestrator | Thursday 05 February 2026 05:15:22 +0000 (0:00:01.130) 0:35:06.824 *****
2026-02-05 05:15:52.210562 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 05:15:52.210581 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 05:15:52.210598 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 05:15:52.210616 | orchestrator |
2026-02-05 05:15:52.210634 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 05:15:52.210652 | orchestrator | Thursday 05 February 2026 05:15:23 +0000 (0:00:01.946) 0:35:08.770 *****
2026-02-05 05:15:52.210669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 05:15:52.210688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 05:15:52.210706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 05:15:52.210722 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210738 | orchestrator |
2026-02-05 05:15:52.210754 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 05:15:52.210770 | orchestrator | Thursday 05 February 2026 05:15:25 +0000 (0:00:01.140) 0:35:09.911 *****
2026-02-05 05:15:52.210785 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-05 05:15:52.210802 | orchestrator |
2026-02-05 05:15:52.210817 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:15:52.210835 | orchestrator | Thursday 05 February 2026 05:15:26 +0000 (0:00:01.123) 0:35:11.034 *****
2026-02-05 05:15:52.210851 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210866 | orchestrator |
2026-02-05 05:15:52.210880 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:15:52.210895 | orchestrator | Thursday 05 February 2026 05:15:27 +0000 (0:00:01.131) 0:35:12.166 *****
2026-02-05 05:15:52.210911 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210928 | orchestrator |
2026-02-05 05:15:52.210944 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:15:52.210959 | orchestrator | Thursday 05 February 2026 05:15:28 +0000 (0:00:01.144) 0:35:13.311 *****
2026-02-05 05:15:52.210974 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.210990 | orchestrator |
2026-02-05 05:15:52.211005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:15:52.211022 | orchestrator | Thursday 05 February 2026 05:15:29 +0000 (0:00:01.144) 0:35:14.455 *****
2026-02-05 05:15:52.211037 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.211055 | orchestrator |
2026-02-05 05:15:52.211072 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:15:52.211105 | orchestrator | Thursday 05 February 2026 05:15:30 +0000 (0:00:01.283) 0:35:15.739 *****
2026-02-05 05:15:52.211123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 05:15:52.211198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 05:15:52.211216 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)
2026-02-05 05:15:52.211232 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.211248 | orchestrator |
2026-02-05 05:15:52.211265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:15:52.211279 | orchestrator | Thursday 05 February 2026 05:15:32 +0000 (0:00:01.397) 0:35:17.136 *****
2026-02-05 05:15:52.211296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 05:15:52.211312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 05:15:52.211328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 05:15:52.211345 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.211361 | orchestrator |
2026-02-05 05:15:52.211378 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:15:52.211394 | orchestrator | Thursday 05 February 2026 05:15:33 +0000 (0:00:01.522) 0:35:18.659 *****
2026-02-05 05:15:52.211407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 05:15:52.211417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 05:15:52.211426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 05:15:52.211436 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:15:52.211445 | orchestrator |
2026-02-05 05:15:52.211455 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:15:52.211464 | orchestrator | Thursday 05 February 2026 05:15:35 +0000 (0:00:01.442) 0:35:20.101 *****
2026-02-05 05:15:52.211474 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.211484 | orchestrator |
2026-02-05 05:15:52.211494 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:15:52.211503 | orchestrator | Thursday 05 February 2026 05:15:36 +0000 (0:00:01.153) 0:35:21.254 *****
2026-02-05 05:15:52.211513 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-05 05:15:52.211522 | orchestrator |
2026-02-05 05:15:52.211532 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-05 05:15:52.211541 | orchestrator | Thursday 05 February 2026 05:15:37 +0000 (0:00:01.342) 0:35:22.597 *****
2026-02-05 05:15:52.211551 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:15:52.211561 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:15:52.211570 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:15:52.211590 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 05:15:52.211601 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:15:52.211611 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 05:15:52.211620 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:15:52.211630 | orchestrator |
2026-02-05 05:15:52.211640 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-05 05:15:52.211649 | orchestrator | Thursday 05 February 2026 05:15:39 +0000 (0:00:02.091) 0:35:24.688 *****
2026-02-05 05:15:52.211659 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:15:52.211668 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:15:52.211678 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:15:52.211687 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 05:15:52.211697 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:15:52.211716 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 05:15:52.211725 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:15:52.211735 | orchestrator |
2026-02-05 05:15:52.211744 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-05 05:15:52.211754 | orchestrator | Thursday 05 February 2026 05:15:42 +0000 (0:00:02.553) 0:35:27.241 *****
2026-02-05 05:15:52.211763 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.211773 | orchestrator |
2026-02-05 05:15:52.211782 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-05 05:15:52.211792 | orchestrator | Thursday 05 February 2026 05:15:43 +0000 (0:00:01.566) 0:35:28.807 *****
2026-02-05 05:15:52.211802 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.211811 | orchestrator |
2026-02-05 05:15:52.211821 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-05 05:15:52.211830 | orchestrator | Thursday 05 February 2026 05:15:45 +0000 (0:00:01.108) 0:35:29.916 *****
2026-02-05 05:15:52.211840 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:15:52.211849 | orchestrator |
2026-02-05 05:15:52.211859 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-05 05:15:52.211868 | orchestrator | Thursday 05 February 2026 05:15:46 +0000 (0:00:01.564) 0:35:31.480 *****
2026-02-05 05:15:52.211878 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-05 05:15:52.211890 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-05 05:15:52.211904 | orchestrator |
2026-02-05 05:15:52.211920 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************
2026-02-05 05:15:52.211930 | orchestrator | Thursday 05 February 2026 05:15:51 +0000 (0:00:04.404) 0:35:35.885 *****
2026-02-05 05:15:52.211940 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-05 05:15:52.211953 | orchestrator |
2026-02-05 05:15:52.211969 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 05:15:52.211988 | orchestrator | Thursday 05 February 2026 05:15:52 +0000 (0:00:01.132) 0:35:37.018 *****
2026-02-05 05:16:42.505501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-05 05:16:42.505624 | orchestrator |
2026-02-05 05:16:42.505646 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 05:16:42.505664 | orchestrator | Thursday 05 February 2026 05:15:53 +0000 (0:00:01.171) 0:35:38.190 *****
2026-02-05 05:16:42.505680 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.505694 | orchestrator |
2026-02-05 05:16:42.505708 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 05:16:42.505720 | orchestrator | Thursday 05 February 2026 05:15:54 +0000 (0:00:01.116) 0:35:39.306 *****
2026-02-05 05:16:42.505734 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.505748 | orchestrator |
2026-02-05 05:16:42.505761 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 05:16:42.505774 | orchestrator | Thursday 05 February 2026 05:15:55 +0000 (0:00:01.514) 0:35:40.821 *****
2026-02-05 05:16:42.505787 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.505799 | orchestrator |
2026-02-05 05:16:42.505813 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 05:16:42.505827 | orchestrator | Thursday 05 February 2026 05:15:57 +0000 (0:00:01.527) 0:35:42.349 *****
2026-02-05 05:16:42.505840 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.505852 | orchestrator |
2026-02-05 05:16:42.505865 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 05:16:42.505877 | orchestrator | Thursday 05 February 2026 05:15:59 +0000 (0:00:01.520) 0:35:43.869 *****
2026-02-05 05:16:42.505892 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.505905 | orchestrator |
2026-02-05 05:16:42.505919 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 05:16:42.505961 | orchestrator | Thursday 05 February 2026 05:16:00 +0000 (0:00:01.089) 0:35:44.959 *****
2026-02-05 05:16:42.505977 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.505991 | orchestrator |
2026-02-05 05:16:42.506006 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 05:16:42.506091 | orchestrator | Thursday 05 February 2026 05:16:01 +0000 (0:00:01.122) 0:35:46.082 *****
2026-02-05 05:16:42.506111 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506132 | orchestrator |
2026-02-05 05:16:42.506172 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 05:16:42.506186 | orchestrator | Thursday 05 February 2026 05:16:02 +0000 (0:00:01.111) 0:35:47.193 *****
2026-02-05 05:16:42.506200 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506213 | orchestrator |
2026-02-05 05:16:42.506233 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 05:16:42.506271 | orchestrator | Thursday 05 February 2026 05:16:04 +0000 (0:00:01.679) 0:35:48.873 *****
2026-02-05 05:16:42.506287 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506300 | orchestrator |
2026-02-05 05:16:42.506315 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 05:16:42.506329 | orchestrator | Thursday 05 February 2026 05:16:05 +0000 (0:00:01.519) 0:35:50.392 *****
2026-02-05 05:16:42.506342 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506356 | orchestrator |
2026-02-05 05:16:42.506368 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 05:16:42.506381 | orchestrator | Thursday 05 February 2026 05:16:06 +0000 (0:00:01.150) 0:35:51.543 *****
2026-02-05 05:16:42.506394 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506408 | orchestrator |
2026-02-05 05:16:42.506423 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 05:16:42.506438 | orchestrator | Thursday 05 February 2026 05:16:07 +0000 (0:00:01.105) 0:35:52.648 *****
2026-02-05 05:16:42.506453 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506469 | orchestrator |
2026-02-05 05:16:42.506484 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 05:16:42.506498 | orchestrator | Thursday 05 February 2026 05:16:08 +0000 (0:00:01.119) 0:35:53.768 *****
2026-02-05 05:16:42.506513 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506527 | orchestrator |
2026-02-05 05:16:42.506540 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 05:16:42.506552 | orchestrator | Thursday 05 February 2026 05:16:10 +0000 (0:00:01.122) 0:35:54.891 *****
2026-02-05 05:16:42.506565 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506578 | orchestrator |
2026-02-05 05:16:42.506590 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 05:16:42.506601 | orchestrator | Thursday 05 February 2026 05:16:11 +0000 (0:00:01.106) 0:35:55.997 *****
2026-02-05 05:16:42.506612 | orchestrator | skipping: 
[testbed-node-3]
2026-02-05 05:16:42.506623 | orchestrator |
2026-02-05 05:16:42.506634 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 05:16:42.506644 | orchestrator | Thursday 05 February 2026 05:16:12 +0000 (0:00:01.105) 0:35:57.103 *****
2026-02-05 05:16:42.506655 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506666 | orchestrator |
2026-02-05 05:16:42.506677 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 05:16:42.506689 | orchestrator | Thursday 05 February 2026 05:16:13 +0000 (0:00:01.132) 0:35:58.236 *****
2026-02-05 05:16:42.506701 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506714 | orchestrator |
2026-02-05 05:16:42.506726 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 05:16:42.506739 | orchestrator | Thursday 05 February 2026 05:16:14 +0000 (0:00:01.118) 0:35:59.354 *****
2026-02-05 05:16:42.506751 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506776 | orchestrator |
2026-02-05 05:16:42.506789 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 05:16:42.506814 | orchestrator | Thursday 05 February 2026 05:16:15 +0000 (0:00:01.154) 0:36:00.508 *****
2026-02-05 05:16:42.506827 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:16:42.506840 | orchestrator |
2026-02-05 05:16:42.506852 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 05:16:42.506863 | orchestrator | Thursday 05 February 2026 05:16:16 +0000 (0:00:01.129) 0:36:01.638 *****
2026-02-05 05:16:42.506875 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506885 | orchestrator |
2026-02-05 05:16:42.506917 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 05:16:42.506930 | orchestrator | Thursday 05 February 2026 05:16:17 +0000 (0:00:01.126) 0:36:02.765 *****
2026-02-05 05:16:42.506942 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506952 | orchestrator |
2026-02-05 05:16:42.506962 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 05:16:42.506974 | orchestrator | Thursday 05 February 2026 05:16:19 +0000 (0:00:01.104) 0:36:03.869 *****
2026-02-05 05:16:42.506985 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.506995 | orchestrator |
2026-02-05 05:16:42.507006 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 05:16:42.507016 | orchestrator | Thursday 05 February 2026 05:16:20 +0000 (0:00:01.110) 0:36:04.980 *****
2026-02-05 05:16:42.507026 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.507037 | orchestrator |
2026-02-05 05:16:42.507049 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 05:16:42.507059 | orchestrator | Thursday 05 February 2026 05:16:21 +0000 (0:00:01.171) 0:36:06.151 *****
2026-02-05 05:16:42.507069 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.507079 | orchestrator |
2026-02-05 05:16:42.507089 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 05:16:42.507099 | orchestrator | Thursday 05 February 2026 05:16:22 +0000 (0:00:01.145) 0:36:07.297 *****
2026-02-05 05:16:42.507110 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.507120 | orchestrator |
2026-02-05 05:16:42.507130 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 05:16:42.507141 | orchestrator | Thursday 05 February 2026 05:16:23 +0000 (0:00:01.133) 0:36:08.430 *****
2026-02-05 05:16:42.507175 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:16:42.507186 | orchestrator | 2026-02-05 
05:16:42.507197 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 05:16:42.507209 | orchestrator | Thursday 05 February 2026 05:16:24 +0000 (0:00:01.163) 0:36:09.594 ***** 2026-02-05 05:16:42.507220 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507230 | orchestrator | 2026-02-05 05:16:42.507242 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:16:42.507254 | orchestrator | Thursday 05 February 2026 05:16:25 +0000 (0:00:01.124) 0:36:10.718 ***** 2026-02-05 05:16:42.507265 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507276 | orchestrator | 2026-02-05 05:16:42.507287 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:16:42.507300 | orchestrator | Thursday 05 February 2026 05:16:27 +0000 (0:00:01.129) 0:36:11.847 ***** 2026-02-05 05:16:42.507322 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507334 | orchestrator | 2026-02-05 05:16:42.507345 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:16:42.507357 | orchestrator | Thursday 05 February 2026 05:16:28 +0000 (0:00:01.115) 0:36:12.963 ***** 2026-02-05 05:16:42.507369 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507380 | orchestrator | 2026-02-05 05:16:42.507392 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-05 05:16:42.507403 | orchestrator | Thursday 05 February 2026 05:16:29 +0000 (0:00:01.101) 0:36:14.065 ***** 2026-02-05 05:16:42.507415 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507426 | orchestrator | 2026-02-05 05:16:42.507438 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:16:42.507462 | orchestrator | Thursday 05 February 2026 05:16:30 +0000 
(0:00:01.106) 0:36:15.171 ***** 2026-02-05 05:16:42.507473 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:16:42.507485 | orchestrator | 2026-02-05 05:16:42.507496 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:16:42.507507 | orchestrator | Thursday 05 February 2026 05:16:32 +0000 (0:00:01.945) 0:36:17.117 ***** 2026-02-05 05:16:42.507518 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:16:42.507530 | orchestrator | 2026-02-05 05:16:42.507541 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:16:42.507552 | orchestrator | Thursday 05 February 2026 05:16:34 +0000 (0:00:02.337) 0:36:19.455 ***** 2026-02-05 05:16:42.507564 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-05 05:16:42.507576 | orchestrator | 2026-02-05 05:16:42.507588 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:16:42.507599 | orchestrator | Thursday 05 February 2026 05:16:35 +0000 (0:00:01.150) 0:36:20.605 ***** 2026-02-05 05:16:42.507610 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507623 | orchestrator | 2026-02-05 05:16:42.507634 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:16:42.507645 | orchestrator | Thursday 05 February 2026 05:16:36 +0000 (0:00:01.121) 0:36:21.727 ***** 2026-02-05 05:16:42.507656 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507668 | orchestrator | 2026-02-05 05:16:42.507679 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:16:42.507690 | orchestrator | Thursday 05 February 2026 05:16:38 +0000 (0:00:01.133) 0:36:22.861 ***** 2026-02-05 05:16:42.507701 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 
05:16:42.507713 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:16:42.507724 | orchestrator | 2026-02-05 05:16:42.507735 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:16:42.507744 | orchestrator | Thursday 05 February 2026 05:16:39 +0000 (0:00:01.816) 0:36:24.677 ***** 2026-02-05 05:16:42.507755 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:16:42.507766 | orchestrator | 2026-02-05 05:16:42.507777 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:16:42.507788 | orchestrator | Thursday 05 February 2026 05:16:41 +0000 (0:00:01.501) 0:36:26.178 ***** 2026-02-05 05:16:42.507800 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:16:42.507811 | orchestrator | 2026-02-05 05:16:42.507823 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:16:42.507848 | orchestrator | Thursday 05 February 2026 05:16:42 +0000 (0:00:01.135) 0:36:27.314 ***** 2026-02-05 05:17:29.240888 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241061 | orchestrator | 2026-02-05 05:17:29.241075 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:17:29.241084 | orchestrator | Thursday 05 February 2026 05:16:43 +0000 (0:00:01.124) 0:36:28.439 ***** 2026-02-05 05:17:29.241091 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241098 | orchestrator | 2026-02-05 05:17:29.241105 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:17:29.241112 | orchestrator | Thursday 05 February 2026 05:16:44 +0000 (0:00:01.095) 0:36:29.534 ***** 2026-02-05 05:17:29.241119 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-05 05:17:29.241127 | orchestrator | 
2026-02-05 05:17:29.241133 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:17:29.241140 | orchestrator | Thursday 05 February 2026 05:16:45 +0000 (0:00:01.128) 0:36:30.663 ***** 2026-02-05 05:17:29.241146 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:17:29.241179 | orchestrator | 2026-02-05 05:17:29.241188 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:17:29.241223 | orchestrator | Thursday 05 February 2026 05:16:47 +0000 (0:00:01.709) 0:36:32.373 ***** 2026-02-05 05:17:29.241230 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:17:29.241237 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:17:29.241243 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:17:29.241249 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241255 | orchestrator | 2026-02-05 05:17:29.241262 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:17:29.241268 | orchestrator | Thursday 05 February 2026 05:16:48 +0000 (0:00:01.116) 0:36:33.490 ***** 2026-02-05 05:17:29.241283 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241290 | orchestrator | 2026-02-05 05:17:29.241303 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-05 05:17:29.241309 | orchestrator | Thursday 05 February 2026 05:16:49 +0000 (0:00:01.151) 0:36:34.641 ***** 2026-02-05 05:17:29.241316 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241322 | orchestrator | 2026-02-05 05:17:29.241328 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:17:29.241334 | orchestrator | Thursday 05 February 2026 05:16:50 +0000 
(0:00:01.171) 0:36:35.812 ***** 2026-02-05 05:17:29.241357 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241364 | orchestrator | 2026-02-05 05:17:29.241370 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:17:29.241377 | orchestrator | Thursday 05 February 2026 05:16:52 +0000 (0:00:01.119) 0:36:36.932 ***** 2026-02-05 05:17:29.241383 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241389 | orchestrator | 2026-02-05 05:17:29.241395 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:17:29.241401 | orchestrator | Thursday 05 February 2026 05:16:53 +0000 (0:00:01.175) 0:36:38.108 ***** 2026-02-05 05:17:29.241408 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241415 | orchestrator | 2026-02-05 05:17:29.241422 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:17:29.241429 | orchestrator | Thursday 05 February 2026 05:16:54 +0000 (0:00:01.123) 0:36:39.231 ***** 2026-02-05 05:17:29.241436 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:17:29.241443 | orchestrator | 2026-02-05 05:17:29.241451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:17:29.241458 | orchestrator | Thursday 05 February 2026 05:16:56 +0000 (0:00:02.543) 0:36:41.775 ***** 2026-02-05 05:17:29.241465 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:17:29.241472 | orchestrator | 2026-02-05 05:17:29.241479 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:17:29.241486 | orchestrator | Thursday 05 February 2026 05:16:58 +0000 (0:00:01.136) 0:36:42.912 ***** 2026-02-05 05:17:29.241495 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-05 05:17:29.241506 | orchestrator | 2026-02-05 
05:17:29.241517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:17:29.241527 | orchestrator | Thursday 05 February 2026 05:16:59 +0000 (0:00:01.124) 0:36:44.036 ***** 2026-02-05 05:17:29.241538 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241548 | orchestrator | 2026-02-05 05:17:29.241559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:17:29.241567 | orchestrator | Thursday 05 February 2026 05:17:00 +0000 (0:00:01.108) 0:36:45.144 ***** 2026-02-05 05:17:29.241577 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241586 | orchestrator | 2026-02-05 05:17:29.241595 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:17:29.241605 | orchestrator | Thursday 05 February 2026 05:17:01 +0000 (0:00:01.136) 0:36:46.281 ***** 2026-02-05 05:17:29.241615 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241633 | orchestrator | 2026-02-05 05:17:29.241644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:17:29.241654 | orchestrator | Thursday 05 February 2026 05:17:02 +0000 (0:00:01.156) 0:36:47.437 ***** 2026-02-05 05:17:29.241665 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241677 | orchestrator | 2026-02-05 05:17:29.241688 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-05 05:17:29.241695 | orchestrator | Thursday 05 February 2026 05:17:03 +0000 (0:00:01.157) 0:36:48.595 ***** 2026-02-05 05:17:29.241703 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241710 | orchestrator | 2026-02-05 05:17:29.241717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:17:29.241724 | orchestrator | Thursday 05 February 2026 05:17:04 +0000 (0:00:01.147) 
0:36:49.743 ***** 2026-02-05 05:17:29.241731 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241738 | orchestrator | 2026-02-05 05:17:29.241764 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:17:29.241772 | orchestrator | Thursday 05 February 2026 05:17:06 +0000 (0:00:01.132) 0:36:50.876 ***** 2026-02-05 05:17:29.241778 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241784 | orchestrator | 2026-02-05 05:17:29.241791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:17:29.241797 | orchestrator | Thursday 05 February 2026 05:17:07 +0000 (0:00:01.111) 0:36:51.988 ***** 2026-02-05 05:17:29.241803 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.241809 | orchestrator | 2026-02-05 05:17:29.241815 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:17:29.241821 | orchestrator | Thursday 05 February 2026 05:17:08 +0000 (0:00:01.111) 0:36:53.099 ***** 2026-02-05 05:17:29.241828 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:17:29.241834 | orchestrator | 2026-02-05 05:17:29.241840 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:17:29.241846 | orchestrator | Thursday 05 February 2026 05:17:09 +0000 (0:00:01.131) 0:36:54.231 ***** 2026-02-05 05:17:29.241852 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-05 05:17:29.241859 | orchestrator | 2026-02-05 05:17:29.241865 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:17:29.241871 | orchestrator | Thursday 05 February 2026 05:17:10 +0000 (0:00:01.095) 0:36:55.326 ***** 2026-02-05 05:17:29.241878 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-05 05:17:29.241885 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-02-05 05:17:29.241891 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-05 05:17:29.241897 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-05 05:17:29.241904 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-05 05:17:29.241910 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-05 05:17:29.241916 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-05 05:17:29.241922 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:17:29.241929 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:17:29.241935 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:17:29.241941 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:17:29.241947 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:17:29.241961 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:17:29.241968 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:17:29.241974 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-05 05:17:29.241980 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-05 05:17:29.241986 | orchestrator | 2026-02-05 05:17:29.241993 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:17:29.242005 | orchestrator | Thursday 05 February 2026 05:17:17 +0000 (0:00:06.905) 0:37:02.232 ***** 2026-02-05 05:17:29.242011 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-05 05:17:29.242073 | orchestrator | 2026-02-05 05:17:29.242080 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-02-05 05:17:29.242086 | orchestrator | Thursday 05 February 2026 05:17:18 +0000 (0:00:01.442) 0:37:03.674 ***** 2026-02-05 05:17:29.242092 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:17:29.242100 | orchestrator | 2026-02-05 05:17:29.242107 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:17:29.242113 | orchestrator | Thursday 05 February 2026 05:17:20 +0000 (0:00:01.512) 0:37:05.187 ***** 2026-02-05 05:17:29.242119 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:17:29.242125 | orchestrator | 2026-02-05 05:17:29.242131 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:17:29.242137 | orchestrator | Thursday 05 February 2026 05:17:22 +0000 (0:00:02.005) 0:37:07.193 ***** 2026-02-05 05:17:29.242144 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.242150 | orchestrator | 2026-02-05 05:17:29.242224 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:17:29.242231 | orchestrator | Thursday 05 February 2026 05:17:23 +0000 (0:00:01.160) 0:37:08.353 ***** 2026-02-05 05:17:29.242237 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.242243 | orchestrator | 2026-02-05 05:17:29.242249 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:17:29.242256 | orchestrator | Thursday 05 February 2026 05:17:24 +0000 (0:00:01.135) 0:37:09.489 ***** 2026-02-05 05:17:29.242262 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.242268 | orchestrator | 2026-02-05 05:17:29.242275 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-02-05 05:17:29.242286 | orchestrator | Thursday 05 February 2026 05:17:25 +0000 (0:00:01.115) 0:37:10.604 ***** 2026-02-05 05:17:29.242297 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.242306 | orchestrator | 2026-02-05 05:17:29.242315 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:17:29.242326 | orchestrator | Thursday 05 February 2026 05:17:26 +0000 (0:00:01.116) 0:37:11.720 ***** 2026-02-05 05:17:29.242336 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.242346 | orchestrator | 2026-02-05 05:17:29.242356 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:17:29.242367 | orchestrator | Thursday 05 February 2026 05:17:28 +0000 (0:00:01.187) 0:37:12.908 ***** 2026-02-05 05:17:29.242376 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:17:29.242382 | orchestrator | 2026-02-05 05:17:29.242395 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:18:20.666493 | orchestrator | Thursday 05 February 2026 05:17:29 +0000 (0:00:01.139) 0:37:14.047 ***** 2026-02-05 05:18:20.666633 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.666657 | orchestrator | 2026-02-05 05:18:20.666675 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 05:18:20.666693 | orchestrator | Thursday 05 February 2026 05:17:30 +0000 (0:00:01.143) 0:37:15.191 ***** 2026-02-05 05:18:20.666708 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.666725 | orchestrator | 2026-02-05 05:18:20.666742 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:18:20.666760 | orchestrator | Thursday 05 February 2026 05:17:31 +0000 (0:00:01.112) 0:37:16.304 ***** 
2026-02-05 05:18:20.666778 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.666796 | orchestrator | 2026-02-05 05:18:20.666844 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:18:20.666862 | orchestrator | Thursday 05 February 2026 05:17:32 +0000 (0:00:01.128) 0:37:17.433 ***** 2026-02-05 05:18:20.666879 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.666896 | orchestrator | 2026-02-05 05:18:20.666914 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:18:20.666932 | orchestrator | Thursday 05 February 2026 05:17:33 +0000 (0:00:01.142) 0:37:18.575 ***** 2026-02-05 05:18:20.666948 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.666967 | orchestrator | 2026-02-05 05:18:20.666983 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:18:20.666998 | orchestrator | Thursday 05 February 2026 05:17:34 +0000 (0:00:01.175) 0:37:19.750 ***** 2026-02-05 05:18:20.667016 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:18:20.667033 | orchestrator | 2026-02-05 05:18:20.667050 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:18:20.667065 | orchestrator | Thursday 05 February 2026 05:17:39 +0000 (0:00:04.539) 0:37:24.290 ***** 2026-02-05 05:18:20.667081 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:18:20.667098 | orchestrator | 2026-02-05 05:18:20.667113 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:18:20.667128 | orchestrator | Thursday 05 February 2026 05:17:40 +0000 (0:00:01.182) 0:37:25.473 ***** 2026-02-05 05:18:20.667196 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-05 05:18:20.667219 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-05 05:18:20.667236 | orchestrator | 2026-02-05 05:18:20.667252 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:18:20.667269 | orchestrator | Thursday 05 February 2026 05:17:48 +0000 (0:00:08.243) 0:37:33.716 ***** 2026-02-05 05:18:20.667284 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667299 | orchestrator | 2026-02-05 05:18:20.667315 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:18:20.667329 | orchestrator | Thursday 05 February 2026 05:17:50 +0000 (0:00:01.130) 0:37:34.847 ***** 2026-02-05 05:18:20.667345 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667390 | orchestrator | 2026-02-05 05:18:20.667406 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:18:20.667424 | orchestrator | Thursday 05 February 2026 05:17:51 +0000 (0:00:01.128) 0:37:35.976 ***** 2026-02-05 05:18:20.667440 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667455 | orchestrator | 2026-02-05 05:18:20.667471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 
05:18:20.667488 | orchestrator | Thursday 05 February 2026 05:17:52 +0000 (0:00:01.122) 0:37:37.098 ***** 2026-02-05 05:18:20.667505 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667520 | orchestrator | 2026-02-05 05:18:20.667536 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:18:20.667551 | orchestrator | Thursday 05 February 2026 05:17:53 +0000 (0:00:01.198) 0:37:38.297 ***** 2026-02-05 05:18:20.667560 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667570 | orchestrator | 2026-02-05 05:18:20.667580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:18:20.667604 | orchestrator | Thursday 05 February 2026 05:17:54 +0000 (0:00:01.141) 0:37:39.438 ***** 2026-02-05 05:18:20.667613 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.667623 | orchestrator | 2026-02-05 05:18:20.667632 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:18:20.667642 | orchestrator | Thursday 05 February 2026 05:17:55 +0000 (0:00:01.273) 0:37:40.711 ***** 2026-02-05 05:18:20.667652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:18:20.667662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:18:20.667671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:18:20.667681 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667690 | orchestrator | 2026-02-05 05:18:20.667700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:18:20.667732 | orchestrator | Thursday 05 February 2026 05:17:57 +0000 (0:00:01.400) 0:37:42.112 ***** 2026-02-05 05:18:20.667740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:18:20.667748 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-05 05:18:20.667756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:18:20.667764 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667771 | orchestrator | 2026-02-05 05:18:20.667779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:18:20.667787 | orchestrator | Thursday 05 February 2026 05:17:58 +0000 (0:00:01.390) 0:37:43.502 ***** 2026-02-05 05:18:20.667795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:18:20.667803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:18:20.667811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:18:20.667819 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.667827 | orchestrator | 2026-02-05 05:18:20.667835 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:18:20.667842 | orchestrator | Thursday 05 February 2026 05:18:00 +0000 (0:00:01.407) 0:37:44.910 ***** 2026-02-05 05:18:20.667850 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.667858 | orchestrator | 2026-02-05 05:18:20.667866 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:18:20.667874 | orchestrator | Thursday 05 February 2026 05:18:01 +0000 (0:00:01.174) 0:37:46.085 ***** 2026-02-05 05:18:20.667882 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 05:18:20.667889 | orchestrator | 2026-02-05 05:18:20.667897 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:18:20.667905 | orchestrator | Thursday 05 February 2026 05:18:02 +0000 (0:00:01.323) 0:37:47.408 ***** 2026-02-05 05:18:20.667913 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.667921 | orchestrator | 2026-02-05 05:18:20.667929 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-02-05 05:18:20.667936 | orchestrator | Thursday 05 February 2026 05:18:04 +0000 (0:00:02.181) 0:37:49.590 ***** 2026-02-05 05:18:20.667944 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.667952 | orchestrator | 2026-02-05 05:18:20.667960 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-05 05:18:20.667967 | orchestrator | Thursday 05 February 2026 05:18:05 +0000 (0:00:01.134) 0:37:50.724 ***** 2026-02-05 05:18:20.667983 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:18:20.667992 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:18:20.668000 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:18:20.668007 | orchestrator | 2026-02-05 05:18:20.668015 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-05 05:18:20.668023 | orchestrator | Thursday 05 February 2026 05:18:07 +0000 (0:00:01.653) 0:37:52.378 ***** 2026-02-05 05:18:20.668037 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-02-05 05:18:20.668045 | orchestrator | 2026-02-05 05:18:20.668052 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-05 05:18:20.668060 | orchestrator | Thursday 05 February 2026 05:18:08 +0000 (0:00:01.436) 0:37:53.815 ***** 2026-02-05 05:18:20.668068 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.668076 | orchestrator | 2026-02-05 05:18:20.668084 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-05 05:18:20.668091 | orchestrator | Thursday 05 February 2026 05:18:10 +0000 (0:00:01.151) 0:37:54.966 ***** 2026-02-05 05:18:20.668099 | 
orchestrator | skipping: [testbed-node-3] 2026-02-05 05:18:20.668107 | orchestrator | 2026-02-05 05:18:20.668115 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-05 05:18:20.668123 | orchestrator | Thursday 05 February 2026 05:18:11 +0000 (0:00:01.144) 0:37:56.111 ***** 2026-02-05 05:18:20.668130 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.668138 | orchestrator | 2026-02-05 05:18:20.668146 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-05 05:18:20.668154 | orchestrator | Thursday 05 February 2026 05:18:12 +0000 (0:00:01.445) 0:37:57.556 ***** 2026-02-05 05:18:20.668198 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:18:20.668209 | orchestrator | 2026-02-05 05:18:20.668218 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-05 05:18:20.668226 | orchestrator | Thursday 05 February 2026 05:18:13 +0000 (0:00:01.180) 0:37:58.737 ***** 2026-02-05 05:18:20.668233 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 05:18:20.668242 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 05:18:20.668249 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 05:18:20.668257 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 05:18:20.668265 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 05:18:20.668273 | orchestrator | 2026-02-05 05:18:20.668281 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-05 05:18:20.668288 | orchestrator | Thursday 05 February 2026 05:18:18 +0000 (0:00:04.129) 0:38:02.866 ***** 2026-02-05 05:18:20.668296 | orchestrator | skipping: [testbed-node-3] 
2026-02-05 05:18:20.668304 | orchestrator |
2026-02-05 05:18:20.668312 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-05 05:18:20.668324 | orchestrator | Thursday 05 February 2026 05:18:19 +0000 (0:00:01.099) 0:38:03.966 *****
2026-02-05 05:18:20.668336 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-02-05 05:18:20.668352 | orchestrator |
2026-02-05 05:18:20.668372 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-05 05:19:29.141748 | orchestrator | Thursday 05 February 2026 05:18:20 +0000 (0:00:01.510) 0:38:05.476 *****
2026-02-05 05:19:29.141873 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-05 05:19:29.141888 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-05 05:19:29.141898 | orchestrator |
2026-02-05 05:19:29.141910 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-05 05:19:29.141922 | orchestrator | Thursday 05 February 2026 05:18:22 +0000 (0:00:01.894) 0:38:07.371 *****
2026-02-05 05:19:29.141932 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:19:29.141944 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-05 05:19:29.141954 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-05 05:19:29.141964 | orchestrator |
2026-02-05 05:19:29.141976 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-05 05:19:29.141982 | orchestrator | Thursday 05 February 2026 05:18:25 +0000 (0:00:03.340) 0:38:10.712 *****
2026-02-05 05:19:29.142009 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-05 05:19:29.142061 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-05 05:19:29.142068 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:19:29.142074 | orchestrator |
2026-02-05 05:19:29.142081 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-05 05:19:29.142087 | orchestrator | Thursday 05 February 2026 05:18:27 +0000 (0:00:01.961) 0:38:12.673 *****
2026-02-05 05:19:29.142094 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142105 | orchestrator |
2026-02-05 05:19:29.142116 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-05 05:19:29.142126 | orchestrator | Thursday 05 February 2026 05:18:29 +0000 (0:00:01.232) 0:38:13.905 *****
2026-02-05 05:19:29.142137 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142148 | orchestrator |
2026-02-05 05:19:29.142158 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-05 05:19:29.142213 | orchestrator | Thursday 05 February 2026 05:18:30 +0000 (0:00:01.118) 0:38:15.024 *****
2026-02-05 05:19:29.142221 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142227 | orchestrator |
2026-02-05 05:19:29.142234 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-05 05:19:29.142240 | orchestrator | Thursday 05 February 2026 05:18:31 +0000 (0:00:01.129) 0:38:16.153 *****
2026-02-05 05:19:29.142258 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-02-05 05:19:29.142265 | orchestrator |
2026-02-05 05:19:29.142271 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-05 05:19:29.142277 | orchestrator | Thursday 05 February 2026 05:18:32 +0000 (0:00:01.438) 0:38:17.591 *****
2026-02-05 05:19:29.142283 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:19:29.142292 | orchestrator |
2026-02-05 05:19:29.142302 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-05 05:19:29.142313 | orchestrator | Thursday 05 February 2026 05:18:34 +0000 (0:00:01.532) 0:38:19.124 *****
2026-02-05 05:19:29.142323 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:19:29.142333 | orchestrator |
2026-02-05 05:19:29.142342 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-05 05:19:29.142349 | orchestrator | Thursday 05 February 2026 05:18:37 +0000 (0:00:03.695) 0:38:22.820 *****
2026-02-05 05:19:29.142356 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-02-05 05:19:29.142363 | orchestrator |
2026-02-05 05:19:29.142371 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-05 05:19:29.142378 | orchestrator | Thursday 05 February 2026 05:18:39 +0000 (0:00:01.521) 0:38:24.342 *****
2026-02-05 05:19:29.142385 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:19:29.142391 | orchestrator |
2026-02-05 05:19:29.142399 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-05 05:19:29.142405 | orchestrator | Thursday 05 February 2026 05:18:41 +0000 (0:00:02.009) 0:38:26.351 *****
2026-02-05 05:19:29.142412 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:19:29.142419 | orchestrator |
2026-02-05 05:19:29.142426 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-05 05:19:29.142433 | orchestrator | Thursday 05 February 2026 05:18:43 +0000 (0:00:02.117) 0:38:28.469 *****
2026-02-05 05:19:29.142440 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:19:29.142447 | orchestrator |
2026-02-05 05:19:29.142454 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-05 05:19:29.142461 | orchestrator | Thursday 05 February 2026 05:18:45 +0000 (0:00:02.330) 0:38:30.800 *****
2026-02-05 05:19:29.142469 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142476 | orchestrator |
2026-02-05 05:19:29.142483 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-05 05:19:29.142490 | orchestrator | Thursday 05 February 2026 05:18:47 +0000 (0:00:01.115) 0:38:31.916 *****
2026-02-05 05:19:29.142505 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142512 | orchestrator |
2026-02-05 05:19:29.142519 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-05 05:19:29.142526 | orchestrator | Thursday 05 February 2026 05:18:48 +0000 (0:00:01.139) 0:38:33.056 *****
2026-02-05 05:19:29.142534 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-02-05 05:19:29.142541 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-02-05 05:19:29.142548 | orchestrator |
2026-02-05 05:19:29.142555 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-05 05:19:29.142562 | orchestrator | Thursday 05 February 2026 05:18:50 +0000 (0:00:01.860) 0:38:34.916 *****
2026-02-05 05:19:29.142569 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-02-05 05:19:29.142577 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-02-05 05:19:29.142584 | orchestrator |
2026-02-05 05:19:29.142591 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-05 05:19:29.142598 | orchestrator | Thursday 05 February 2026 05:18:53 +0000 (0:00:02.936) 0:38:37.853 *****
2026-02-05 05:19:29.142605 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-05 05:19:29.142628 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-02-05 05:19:29.142636 | orchestrator |
2026-02-05 05:19:29.142643 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-05 05:19:29.142650 | orchestrator | Thursday 05 February 2026 05:18:57 +0000 (0:00:04.756) 0:38:42.609 *****
2026-02-05 05:19:29.142657 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142664 | orchestrator |
2026-02-05 05:19:29.142671 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-05 05:19:29.142678 | orchestrator | Thursday 05 February 2026 05:18:58 +0000 (0:00:01.201) 0:38:43.810 *****
2026-02-05 05:19:29.142686 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142693 | orchestrator |
2026-02-05 05:19:29.142700 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-05 05:19:29.142708 | orchestrator | Thursday 05 February 2026 05:19:00 +0000 (0:00:01.201) 0:38:45.012 *****
2026-02-05 05:19:29.142715 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142722 | orchestrator |
2026-02-05 05:19:29.142729 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-02-05 05:19:29.142736 | orchestrator | Thursday 05 February 2026 05:19:01 +0000 (0:00:01.219) 0:38:46.231 *****
2026-02-05 05:19:29.142743 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142750 | orchestrator |
2026-02-05 05:19:29.142757 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-02-05 05:19:29.142765 | orchestrator | Thursday 05 February 2026 05:19:02 +0000 (0:00:01.126) 0:38:47.357 *****
2026-02-05 05:19:29.142772 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:19:29.142779 | orchestrator |
2026-02-05 05:19:29.142787 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-02-05 05:19:29.142794 | orchestrator | Thursday 05 February 2026 05:19:03 +0000 (0:00:01.183) 0:38:48.541 *****
2026-02-05 05:19:29.142801 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-02-05 05:19:29.142809 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-02-05 05:19:29.142817 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-02-05 05:19:29.142824 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 05:19:29.142831 | orchestrator |
2026-02-05 05:19:29.142843 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-05 05:19:29.142850 | orchestrator |
2026-02-05 05:19:29.142857 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:19:29.142864 | orchestrator | Thursday 05 February 2026 05:19:15 +0000 (0:00:11.463) 0:39:00.005 *****
2026-02-05 05:19:29.142872 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-02-05 05:19:29.142883 | orchestrator |
2026-02-05 05:19:29.142890 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 05:19:29.142897 | orchestrator | Thursday 05 February 2026 05:19:16 +0000 (0:00:01.146) 0:39:01.151 *****
2026-02-05 05:19:29.142904 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.142912 | orchestrator |
2026-02-05 05:19:29.142921 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 05:19:29.142933 | orchestrator | Thursday 05 February 2026 05:19:17 +0000 (0:00:01.448) 0:39:02.600 *****
2026-02-05 05:19:29.142940 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.142948 | orchestrator |
2026-02-05 05:19:29.142955 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:19:29.142962 | orchestrator | Thursday 05 February 2026 05:19:18 +0000 (0:00:01.137) 0:39:03.738 *****
2026-02-05 05:19:29.142972 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.142981 | orchestrator |
2026-02-05 05:19:29.142989 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:19:29.142996 | orchestrator | Thursday 05 February 2026 05:19:20 +0000 (0:00:01.466) 0:39:05.204 *****
2026-02-05 05:19:29.143003 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.143010 | orchestrator |
2026-02-05 05:19:29.143017 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 05:19:29.143024 | orchestrator | Thursday 05 February 2026 05:19:21 +0000 (0:00:01.112) 0:39:06.316 *****
2026-02-05 05:19:29.143031 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.143038 | orchestrator |
2026-02-05 05:19:29.143045 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 05:19:29.143053 | orchestrator | Thursday 05 February 2026 05:19:22 +0000 (0:00:01.102) 0:39:07.419 *****
2026-02-05 05:19:29.143060 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.143067 | orchestrator |
2026-02-05 05:19:29.143075 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 05:19:29.143082 | orchestrator | Thursday 05 February 2026 05:19:23 +0000 (0:00:01.137) 0:39:08.556 *****
2026-02-05 05:19:29.143089 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:29.143096 | orchestrator |
2026-02-05 05:19:29.143104 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 05:19:29.143111 | orchestrator | Thursday 05 February 2026 05:19:24 +0000 (0:00:01.114) 0:39:09.671 *****
2026-02-05 05:19:29.143118 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:29.143125 | orchestrator |
2026-02-05 05:19:29.143132 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 05:19:29.143140 | orchestrator | Thursday 05 February 2026 05:19:25 +0000 (0:00:01.123) 0:39:10.794 *****
2026-02-05 05:19:29.143150 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:19:29.143160 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:19:29.143187 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:19:29.143194 | orchestrator |
2026-02-05 05:19:29.143202 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 05:19:29.143209 | orchestrator | Thursday 05 February 2026 05:19:27 +0000 (0:00:01.934) 0:39:12.729 *****
2026-02-05 05:19:29.143220 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:51.554274 | orchestrator |
2026-02-05 05:19:51.554425 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 05:19:51.554457 | orchestrator | Thursday 05 February 2026 05:19:29 +0000 (0:00:01.223) 0:39:13.953 *****
2026-02-05 05:19:51.554478 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:19:51.554500 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:19:51.554521 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:19:51.554541 | orchestrator |
2026-02-05 05:19:51.554561 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 05:19:51.554614 | orchestrator | Thursday 05 February 2026 05:19:32 +0000 (0:00:02.936) 0:39:16.889 *****
2026-02-05 05:19:51.554637 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 05:19:51.554658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 05:19:51.554677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 05:19:51.554696 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.554714 | orchestrator |
2026-02-05 05:19:51.554731 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 05:19:51.554750 | orchestrator | Thursday 05 February 2026 05:19:33 +0000 (0:00:01.393) 0:39:18.283 *****
2026-02-05 05:19:51.554770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.554794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.554833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.554854 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.554873 | orchestrator |
2026-02-05 05:19:51.554891 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 05:19:51.554910 | orchestrator | Thursday 05 February 2026 05:19:35 +0000 (0:00:01.614) 0:39:19.897 *****
2026-02-05 05:19:51.554932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.554956 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.554976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.554996 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.555017 | orchestrator |
2026-02-05 05:19:51.555037 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 05:19:51.555058 | orchestrator | Thursday 05 February 2026 05:19:36 +0000 (0:00:01.179) 0:39:21.077 *****
2026-02-05 05:19:51.555107 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:19:29.710970', 'end': '2026-02-05 05:19:29.787157', 'delta': '0:00:00.076187', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.555149 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:19:30.313963', 'end': '2026-02-05 05:19:30.360314', 'delta': '0:00:00.046351', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.555202 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:19:30.878677', 'end': '2026-02-05 05:19:30.939090', 'delta': '0:00:00.060413', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:19:51.555226 | orchestrator |
2026-02-05 05:19:51.555256 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 05:19:51.555278 | orchestrator | Thursday 05 February 2026 05:19:37 +0000 (0:00:01.178) 0:39:22.256 *****
2026-02-05 05:19:51.555297 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:51.555317 | orchestrator |
2026-02-05 05:19:51.555336 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 05:19:51.555355 | orchestrator | Thursday 05 February 2026 05:19:38 +0000 (0:00:01.237) 0:39:23.493 *****
2026-02-05 05:19:51.555375 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.555392 | orchestrator |
2026-02-05 05:19:51.555411 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 05:19:51.555431 | orchestrator | Thursday 05 February 2026 05:19:39 +0000 (0:00:01.207) 0:39:24.700 *****
2026-02-05 05:19:51.555451 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:51.555471 | orchestrator |
2026-02-05 05:19:51.555492 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 05:19:51.555512 | orchestrator | Thursday 05 February 2026 05:19:40 +0000 (0:00:01.113) 0:39:25.813 *****
2026-02-05 05:19:51.555532 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 05:19:51.555552 | orchestrator |
2026-02-05 05:19:51.555573 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:19:51.555593 | orchestrator | Thursday 05 February 2026 05:19:42 +0000 (0:00:01.966) 0:39:27.780 *****
2026-02-05 05:19:51.555611 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:51.555629 | orchestrator |
2026-02-05 05:19:51.555647 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 05:19:51.555666 | orchestrator | Thursday 05 February 2026 05:19:44 +0000 (0:00:01.126) 0:39:28.907 *****
2026-02-05 05:19:51.555685 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.555703 | orchestrator |
2026-02-05 05:19:51.555718 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 05:19:51.555734 | orchestrator | Thursday 05 February 2026 05:19:45 +0000 (0:00:01.097) 0:39:30.004 *****
2026-02-05 05:19:51.555751 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.555783 | orchestrator |
2026-02-05 05:19:51.555803 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:19:51.555822 | orchestrator | Thursday 05 February 2026 05:19:46 +0000 (0:00:01.229) 0:39:31.234 *****
2026-02-05 05:19:51.555842 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.555863 | orchestrator |
2026-02-05 05:19:51.555881 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 05:19:51.555900 | orchestrator | Thursday 05 February 2026 05:19:47 +0000 (0:00:01.125) 0:39:32.360 *****
2026-02-05 05:19:51.555919 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.555939 | orchestrator |
2026-02-05 05:19:51.555959 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 05:19:51.555978 | orchestrator | Thursday 05 February 2026 05:19:48 +0000 (0:00:01.098) 0:39:33.458 *****
2026-02-05 05:19:51.555996 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:51.556015 | orchestrator |
2026-02-05 05:19:51.556036 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 05:19:51.556055 | orchestrator | Thursday 05 February 2026 05:19:49 +0000 (0:00:01.072) 0:39:34.531 *****
2026-02-05 05:19:51.556075 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:51.556094 | orchestrator |
2026-02-05 05:19:51.556114 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 05:19:51.556134 | orchestrator | Thursday 05 February 2026 05:19:50 +0000 (0:00:00.901) 0:39:35.433 *****
2026-02-05 05:19:51.556153 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:51.556194 | orchestrator |
2026-02-05 05:19:51.556214 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 05:19:51.556248 | orchestrator | Thursday 05 February 2026 05:19:51 +0000 (0:00:00.929) 0:39:36.362 *****
2026-02-05 05:19:53.580851 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:19:53.580932 | orchestrator |
2026-02-05 05:19:53.580943 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 05:19:53.580951 | orchestrator | Thursday 05 February 2026 05:19:52 +0000 (0:00:00.886) 0:39:37.248 *****
2026-02-05 05:19:53.580959 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:19:53.580966 | orchestrator |
2026-02-05 05:19:53.580973 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 05:19:53.580980 | orchestrator | Thursday 05 February 2026 05:19:53 +0000 (0:00:00.952) 0:39:38.200 *****
2026-02-05 05:19:53.580988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:19:53.580999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}})
2026-02-05 05:19:53.581016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 05:19:53.581051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}})
2026-02-05 05:19:53.581060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:19:53.581067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:19:53.581086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 05:19:53.581094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:19:53.581101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-05 05:19:53.581111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:19:53.581122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}})
2026-02-05 05:19:53.581129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}})
2026-02-05 05:19:53.581135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:19:53.581154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-02-05 05:19:54.675598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:19:54.675677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:19:54.675688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:19:54.675698 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:19:54.675708 | orchestrator | 2026-02-05 05:19:54.675716 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:19:54.675723 | orchestrator | Thursday 05 February 2026 05:19:54 +0000 (0:00:01.094) 0:39:39.295 ***** 2026-02-05 05:19:54.675732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:19:59.614797 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:19:59.614806 | orchestrator | 2026-02-05 05:19:59.614814 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:19:59.614823 | orchestrator | Thursday 05 February 2026 05:19:55 +0000 (0:00:01.147) 0:39:40.443 ***** 2026-02-05 05:19:59.614829 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:19:59.614838 | orchestrator | 2026-02-05 05:19:59.614844 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:19:59.614851 | orchestrator | Thursday 05 February 2026 05:19:57 +0000 (0:00:01.455) 0:39:41.898 ***** 2026-02-05 05:19:59.614858 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:19:59.614864 | orchestrator | 2026-02-05 05:19:59.614871 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:19:59.614881 | orchestrator | Thursday 05 February 2026 05:19:58 +0000 (0:00:01.088) 0:39:42.987 ***** 2026-02-05 05:19:59.614888 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:19:59.614895 | orchestrator | 2026-02-05 05:19:59.614902 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:19:59.614913 | orchestrator | Thursday 05 February 2026 05:19:59 +0000 (0:00:01.442) 0:39:44.429 ***** 2026-02-05 05:20:41.087450 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087540 | orchestrator | 2026-02-05 05:20:41.087551 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:20:41.087560 | orchestrator | Thursday 05 February 2026 05:20:00 +0000 (0:00:01.102) 0:39:45.532 ***** 2026-02-05 05:20:41.087566 | orchestrator | skipping: [testbed-node-4] 2026-02-05 
05:20:41.087573 | orchestrator | 2026-02-05 05:20:41.087579 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:20:41.087586 | orchestrator | Thursday 05 February 2026 05:20:02 +0000 (0:00:01.576) 0:39:47.109 ***** 2026-02-05 05:20:41.087593 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087599 | orchestrator | 2026-02-05 05:20:41.087606 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:20:41.087613 | orchestrator | Thursday 05 February 2026 05:20:03 +0000 (0:00:01.154) 0:39:48.263 ***** 2026-02-05 05:20:41.087620 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-05 05:20:41.087626 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-05 05:20:41.087633 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-05 05:20:41.087639 | orchestrator | 2026-02-05 05:20:41.087645 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:20:41.087651 | orchestrator | Thursday 05 February 2026 05:20:05 +0000 (0:00:01.701) 0:39:49.965 ***** 2026-02-05 05:20:41.087658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 05:20:41.087665 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 05:20:41.087671 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 05:20:41.087677 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087683 | orchestrator | 2026-02-05 05:20:41.087689 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:20:41.087696 | orchestrator | Thursday 05 February 2026 05:20:06 +0000 (0:00:01.161) 0:39:51.126 ***** 2026-02-05 05:20:41.087702 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-05 05:20:41.087708 | 
orchestrator | 2026-02-05 05:20:41.087715 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:20:41.087723 | orchestrator | Thursday 05 February 2026 05:20:07 +0000 (0:00:01.117) 0:39:52.244 ***** 2026-02-05 05:20:41.087729 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087735 | orchestrator | 2026-02-05 05:20:41.087742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:20:41.087748 | orchestrator | Thursday 05 February 2026 05:20:08 +0000 (0:00:01.156) 0:39:53.400 ***** 2026-02-05 05:20:41.087771 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087778 | orchestrator | 2026-02-05 05:20:41.087784 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:20:41.087790 | orchestrator | Thursday 05 February 2026 05:20:09 +0000 (0:00:01.149) 0:39:54.550 ***** 2026-02-05 05:20:41.087797 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087803 | orchestrator | 2026-02-05 05:20:41.087809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:20:41.087815 | orchestrator | Thursday 05 February 2026 05:20:10 +0000 (0:00:01.118) 0:39:55.669 ***** 2026-02-05 05:20:41.087822 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.087828 | orchestrator | 2026-02-05 05:20:41.087834 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:20:41.087840 | orchestrator | Thursday 05 February 2026 05:20:12 +0000 (0:00:01.215) 0:39:56.885 ***** 2026-02-05 05:20:41.087846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:20:41.087853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:20:41.087859 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-05 05:20:41.087865 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087872 | orchestrator | 2026-02-05 05:20:41.087879 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:20:41.087887 | orchestrator | Thursday 05 February 2026 05:20:13 +0000 (0:00:01.378) 0:39:58.263 ***** 2026-02-05 05:20:41.087897 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:20:41.087909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:20:41.087921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 05:20:41.087933 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.087945 | orchestrator | 2026-02-05 05:20:41.087956 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:20:41.087968 | orchestrator | Thursday 05 February 2026 05:20:15 +0000 (0:00:01.687) 0:39:59.950 ***** 2026-02-05 05:20:41.087980 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:20:41.087991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:20:41.088001 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 05:20:41.088012 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.088023 | orchestrator | 2026-02-05 05:20:41.088036 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:20:41.088047 | orchestrator | Thursday 05 February 2026 05:20:16 +0000 (0:00:01.677) 0:40:01.628 ***** 2026-02-05 05:20:41.088073 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088087 | orchestrator | 2026-02-05 05:20:41.088100 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:20:41.088113 | orchestrator | Thursday 05 February 2026 05:20:17 +0000 
(0:00:01.124) 0:40:02.753 ***** 2026-02-05 05:20:41.088142 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 05:20:41.088156 | orchestrator | 2026-02-05 05:20:41.088168 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:20:41.088203 | orchestrator | Thursday 05 February 2026 05:20:19 +0000 (0:00:01.292) 0:40:04.045 ***** 2026-02-05 05:20:41.088234 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:20:41.088249 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:20:41.088261 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:20:41.088273 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:20:41.088286 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-05 05:20:41.088298 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:20:41.088319 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:20:41.088326 | orchestrator | 2026-02-05 05:20:41.088334 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:20:41.088341 | orchestrator | Thursday 05 February 2026 05:20:21 +0000 (0:00:01.796) 0:40:05.842 ***** 2026-02-05 05:20:41.088348 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:20:41.088355 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:20:41.088362 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:20:41.088369 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-05 05:20:41.088377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-05 05:20:41.088384 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:20:41.088391 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:20:41.088398 | orchestrator | 2026-02-05 05:20:41.088406 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-05 05:20:41.088413 | orchestrator | Thursday 05 February 2026 05:20:23 +0000 (0:00:02.178) 0:40:08.021 ***** 2026-02-05 05:20:41.088420 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088428 | orchestrator | 2026-02-05 05:20:41.088435 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-05 05:20:41.088442 | orchestrator | Thursday 05 February 2026 05:20:24 +0000 (0:00:01.135) 0:40:09.156 ***** 2026-02-05 05:20:41.088449 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088457 | orchestrator | 2026-02-05 05:20:41.088464 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-05 05:20:41.088471 | orchestrator | Thursday 05 February 2026 05:20:25 +0000 (0:00:00.769) 0:40:09.925 ***** 2026-02-05 05:20:41.088479 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088490 | orchestrator | 2026-02-05 05:20:41.088502 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-05 05:20:41.088513 | orchestrator | Thursday 05 February 2026 05:20:26 +0000 (0:00:00.910) 0:40:10.836 ***** 2026-02-05 05:20:41.088525 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-05 05:20:41.088537 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-05 05:20:41.088549 | orchestrator | 2026-02-05 05:20:41.088560 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-05 05:20:41.088567 | orchestrator | Thursday 05 February 2026 05:20:29 +0000 (0:00:03.779) 0:40:14.615 ***** 2026-02-05 05:20:41.088574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-05 05:20:41.088582 | orchestrator | 2026-02-05 05:20:41.088589 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:20:41.088597 | orchestrator | Thursday 05 February 2026 05:20:30 +0000 (0:00:01.113) 0:40:15.728 ***** 2026-02-05 05:20:41.088604 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-05 05:20:41.088611 | orchestrator | 2026-02-05 05:20:41.088619 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:20:41.088626 | orchestrator | Thursday 05 February 2026 05:20:31 +0000 (0:00:01.079) 0:40:16.808 ***** 2026-02-05 05:20:41.088633 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.088641 | orchestrator | 2026-02-05 05:20:41.088648 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:20:41.088655 | orchestrator | Thursday 05 February 2026 05:20:33 +0000 (0:00:01.111) 0:40:17.920 ***** 2026-02-05 05:20:41.088662 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088670 | orchestrator | 2026-02-05 05:20:41.088677 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:20:41.088684 | orchestrator | Thursday 05 February 2026 05:20:34 +0000 (0:00:01.533) 0:40:19.453 ***** 2026-02-05 05:20:41.088697 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088705 | orchestrator | 2026-02-05 05:20:41.088712 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:20:41.088719 | orchestrator | 
Thursday 05 February 2026 05:20:36 +0000 (0:00:01.563) 0:40:21.017 ***** 2026-02-05 05:20:41.088727 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:20:41.088734 | orchestrator | 2026-02-05 05:20:41.088741 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:20:41.088749 | orchestrator | Thursday 05 February 2026 05:20:37 +0000 (0:00:01.526) 0:40:22.543 ***** 2026-02-05 05:20:41.088756 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.088763 | orchestrator | 2026-02-05 05:20:41.088770 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:20:41.088778 | orchestrator | Thursday 05 February 2026 05:20:38 +0000 (0:00:01.110) 0:40:23.653 ***** 2026-02-05 05:20:41.088791 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.088799 | orchestrator | 2026-02-05 05:20:41.088806 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:20:41.088813 | orchestrator | Thursday 05 February 2026 05:20:39 +0000 (0:00:01.111) 0:40:24.765 ***** 2026-02-05 05:20:41.088821 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:20:41.088828 | orchestrator | 2026-02-05 05:20:41.088843 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:21:20.782445 | orchestrator | Thursday 05 February 2026 05:20:41 +0000 (0:00:01.129) 0:40:25.895 ***** 2026-02-05 05:21:20.782532 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782541 | orchestrator | 2026-02-05 05:21:20.782549 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:21:20.782555 | orchestrator | Thursday 05 February 2026 05:20:42 +0000 (0:00:01.549) 0:40:27.445 ***** 2026-02-05 05:21:20.782560 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782565 | orchestrator | 2026-02-05 05:21:20.782571 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:21:20.782577 | orchestrator | Thursday 05 February 2026 05:20:44 +0000 (0:00:01.582) 0:40:29.027 ***** 2026-02-05 05:21:20.782582 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782588 | orchestrator | 2026-02-05 05:21:20.782593 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:21:20.782599 | orchestrator | Thursday 05 February 2026 05:20:44 +0000 (0:00:00.750) 0:40:29.778 ***** 2026-02-05 05:21:20.782604 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782609 | orchestrator | 2026-02-05 05:21:20.782614 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:21:20.782620 | orchestrator | Thursday 05 February 2026 05:20:45 +0000 (0:00:00.773) 0:40:30.552 ***** 2026-02-05 05:21:20.782625 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782630 | orchestrator | 2026-02-05 05:21:20.782635 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 05:21:20.782640 | orchestrator | Thursday 05 February 2026 05:20:46 +0000 (0:00:00.796) 0:40:31.348 ***** 2026-02-05 05:21:20.782645 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782650 | orchestrator | 2026-02-05 05:21:20.782656 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:21:20.782661 | orchestrator | Thursday 05 February 2026 05:20:47 +0000 (0:00:00.772) 0:40:32.121 ***** 2026-02-05 05:21:20.782666 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782671 | orchestrator | 2026-02-05 05:21:20.782676 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:21:20.782682 | orchestrator | Thursday 05 February 2026 05:20:48 +0000 (0:00:00.827) 0:40:32.948 ***** 2026-02-05 05:21:20.782687 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782693 | orchestrator | 2026-02-05 05:21:20.782698 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:21:20.782703 | orchestrator | Thursday 05 February 2026 05:20:48 +0000 (0:00:00.795) 0:40:33.743 ***** 2026-02-05 05:21:20.782724 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782729 | orchestrator | 2026-02-05 05:21:20.782734 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 05:21:20.782739 | orchestrator | Thursday 05 February 2026 05:20:49 +0000 (0:00:00.801) 0:40:34.545 ***** 2026-02-05 05:21:20.782744 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782749 | orchestrator | 2026-02-05 05:21:20.782755 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:21:20.782760 | orchestrator | Thursday 05 February 2026 05:20:50 +0000 (0:00:00.811) 0:40:35.357 ***** 2026-02-05 05:21:20.782765 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782770 | orchestrator | 2026-02-05 05:21:20.782775 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 05:21:20.782780 | orchestrator | Thursday 05 February 2026 05:20:51 +0000 (0:00:00.781) 0:40:36.138 ***** 2026-02-05 05:21:20.782785 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.782790 | orchestrator | 2026-02-05 05:21:20.782796 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 05:21:20.782801 | orchestrator | Thursday 05 February 2026 05:20:52 +0000 (0:00:00.814) 0:40:36.953 ***** 2026-02-05 05:21:20.782806 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782811 | orchestrator | 2026-02-05 05:21:20.782816 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 
05:21:20.782821 | orchestrator | Thursday 05 February 2026 05:20:52 +0000 (0:00:00.803) 0:40:37.756 ***** 2026-02-05 05:21:20.782826 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782831 | orchestrator | 2026-02-05 05:21:20.782836 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 05:21:20.782841 | orchestrator | Thursday 05 February 2026 05:20:53 +0000 (0:00:00.820) 0:40:38.577 ***** 2026-02-05 05:21:20.782846 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782851 | orchestrator | 2026-02-05 05:21:20.782857 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 05:21:20.782862 | orchestrator | Thursday 05 February 2026 05:20:54 +0000 (0:00:00.745) 0:40:39.322 ***** 2026-02-05 05:21:20.782867 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782872 | orchestrator | 2026-02-05 05:21:20.782877 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 05:21:20.782882 | orchestrator | Thursday 05 February 2026 05:20:55 +0000 (0:00:00.751) 0:40:40.074 ***** 2026-02-05 05:21:20.782887 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782892 | orchestrator | 2026-02-05 05:21:20.782898 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 05:21:20.782903 | orchestrator | Thursday 05 February 2026 05:20:55 +0000 (0:00:00.743) 0:40:40.818 ***** 2026-02-05 05:21:20.782908 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782913 | orchestrator | 2026-02-05 05:21:20.782918 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 05:21:20.782923 | orchestrator | Thursday 05 February 2026 05:20:56 +0000 (0:00:00.764) 0:40:41.583 ***** 2026-02-05 05:21:20.782928 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782933 | 
orchestrator | 2026-02-05 05:21:20.782939 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 05:21:20.782951 | orchestrator | Thursday 05 February 2026 05:20:57 +0000 (0:00:00.796) 0:40:42.379 ***** 2026-02-05 05:21:20.782957 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782962 | orchestrator | 2026-02-05 05:21:20.782967 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:21:20.782972 | orchestrator | Thursday 05 February 2026 05:20:58 +0000 (0:00:00.784) 0:40:43.164 ***** 2026-02-05 05:21:20.782987 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.782993 | orchestrator | 2026-02-05 05:21:20.782998 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:21:20.783009 | orchestrator | Thursday 05 February 2026 05:20:59 +0000 (0:00:00.773) 0:40:43.938 ***** 2026-02-05 05:21:20.783014 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783019 | orchestrator | 2026-02-05 05:21:20.783025 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:21:20.783032 | orchestrator | Thursday 05 February 2026 05:20:59 +0000 (0:00:00.765) 0:40:44.703 ***** 2026-02-05 05:21:20.783038 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783044 | orchestrator | 2026-02-05 05:21:20.783050 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-05 05:21:20.783056 | orchestrator | Thursday 05 February 2026 05:21:00 +0000 (0:00:00.803) 0:40:45.507 ***** 2026-02-05 05:21:20.783062 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783068 | orchestrator | 2026-02-05 05:21:20.783074 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:21:20.783079 | orchestrator | Thursday 05 
February 2026 05:21:01 +0000 (0:00:00.804) 0:40:46.312 ***** 2026-02-05 05:21:20.783086 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.783091 | orchestrator | 2026-02-05 05:21:20.783097 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:21:20.783103 | orchestrator | Thursday 05 February 2026 05:21:03 +0000 (0:00:01.623) 0:40:47.935 ***** 2026-02-05 05:21:20.783109 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.783116 | orchestrator | 2026-02-05 05:21:20.783122 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:21:20.783128 | orchestrator | Thursday 05 February 2026 05:21:04 +0000 (0:00:01.887) 0:40:49.823 ***** 2026-02-05 05:21:20.783133 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-05 05:21:20.783140 | orchestrator | 2026-02-05 05:21:20.783146 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:21:20.783153 | orchestrator | Thursday 05 February 2026 05:21:06 +0000 (0:00:01.135) 0:40:50.958 ***** 2026-02-05 05:21:20.783158 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783164 | orchestrator | 2026-02-05 05:21:20.783170 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:21:20.783176 | orchestrator | Thursday 05 February 2026 05:21:07 +0000 (0:00:01.119) 0:40:52.077 ***** 2026-02-05 05:21:20.783196 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783202 | orchestrator | 2026-02-05 05:21:20.783208 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:21:20.783214 | orchestrator | Thursday 05 February 2026 05:21:08 +0000 (0:00:01.152) 0:40:53.230 ***** 2026-02-05 05:21:20.783220 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:21:20.783226 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:21:20.783232 | orchestrator | 2026-02-05 05:21:20.783238 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:21:20.783244 | orchestrator | Thursday 05 February 2026 05:21:10 +0000 (0:00:01.849) 0:40:55.080 ***** 2026-02-05 05:21:20.783250 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.783256 | orchestrator | 2026-02-05 05:21:20.783262 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:21:20.783268 | orchestrator | Thursday 05 February 2026 05:21:11 +0000 (0:00:01.454) 0:40:56.534 ***** 2026-02-05 05:21:20.783274 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783279 | orchestrator | 2026-02-05 05:21:20.783286 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:21:20.783292 | orchestrator | Thursday 05 February 2026 05:21:12 +0000 (0:00:01.123) 0:40:57.657 ***** 2026-02-05 05:21:20.783297 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783303 | orchestrator | 2026-02-05 05:21:20.783309 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:21:20.783315 | orchestrator | Thursday 05 February 2026 05:21:13 +0000 (0:00:00.811) 0:40:58.468 ***** 2026-02-05 05:21:20.783326 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783332 | orchestrator | 2026-02-05 05:21:20.783338 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:21:20.783344 | orchestrator | Thursday 05 February 2026 05:21:14 +0000 (0:00:00.788) 0:40:59.257 ***** 2026-02-05 05:21:20.783349 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-02-05 05:21:20.783355 | orchestrator | 2026-02-05 05:21:20.783361 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:21:20.783367 | orchestrator | Thursday 05 February 2026 05:21:15 +0000 (0:00:01.123) 0:41:00.380 ***** 2026-02-05 05:21:20.783374 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:21:20.783379 | orchestrator | 2026-02-05 05:21:20.783385 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:21:20.783390 | orchestrator | Thursday 05 February 2026 05:21:17 +0000 (0:00:01.723) 0:41:02.104 ***** 2026-02-05 05:21:20.783396 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:21:20.783401 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:21:20.783406 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:21:20.783411 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783416 | orchestrator | 2026-02-05 05:21:20.783424 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:21:20.783429 | orchestrator | Thursday 05 February 2026 05:21:18 +0000 (0:00:01.170) 0:41:03.275 ***** 2026-02-05 05:21:20.783434 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:21:20.783439 | orchestrator | 2026-02-05 05:21:20.783444 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-05 05:21:20.783450 | orchestrator | Thursday 05 February 2026 05:21:19 +0000 (0:00:01.106) 0:41:04.381 ***** 2026-02-05 05:21:20.783458 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.661555 | orchestrator | 2026-02-05 05:22:03.661689 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:22:03.661709 | 
orchestrator | Thursday 05 February 2026 05:21:20 +0000 (0:00:01.211) 0:41:05.593 ***** 2026-02-05 05:22:03.661722 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.661735 | orchestrator | 2026-02-05 05:22:03.661746 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:22:03.661758 | orchestrator | Thursday 05 February 2026 05:21:21 +0000 (0:00:01.119) 0:41:06.713 ***** 2026-02-05 05:22:03.661769 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.661781 | orchestrator | 2026-02-05 05:22:03.661792 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:22:03.661804 | orchestrator | Thursday 05 February 2026 05:21:23 +0000 (0:00:01.124) 0:41:07.837 ***** 2026-02-05 05:22:03.661815 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.661826 | orchestrator | 2026-02-05 05:22:03.661837 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:22:03.661847 | orchestrator | Thursday 05 February 2026 05:21:23 +0000 (0:00:00.762) 0:41:08.599 ***** 2026-02-05 05:22:03.661867 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:03.661885 | orchestrator | 2026-02-05 05:22:03.661904 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:22:03.661924 | orchestrator | Thursday 05 February 2026 05:21:25 +0000 (0:00:02.183) 0:41:10.783 ***** 2026-02-05 05:22:03.661944 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:03.661963 | orchestrator | 2026-02-05 05:22:03.661981 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:22:03.661994 | orchestrator | Thursday 05 February 2026 05:21:26 +0000 (0:00:00.774) 0:41:11.557 ***** 2026-02-05 05:22:03.662079 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-02-05 05:22:03.662100 | orchestrator | 2026-02-05 05:22:03.662119 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:22:03.662169 | orchestrator | Thursday 05 February 2026 05:21:27 +0000 (0:00:01.129) 0:41:12.687 ***** 2026-02-05 05:22:03.662214 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662233 | orchestrator | 2026-02-05 05:22:03.662249 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:22:03.662267 | orchestrator | Thursday 05 February 2026 05:21:29 +0000 (0:00:01.189) 0:41:13.876 ***** 2026-02-05 05:22:03.662287 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662305 | orchestrator | 2026-02-05 05:22:03.662325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:22:03.662344 | orchestrator | Thursday 05 February 2026 05:21:30 +0000 (0:00:01.152) 0:41:15.029 ***** 2026-02-05 05:22:03.662362 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662379 | orchestrator | 2026-02-05 05:22:03.662390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:22:03.662401 | orchestrator | Thursday 05 February 2026 05:21:31 +0000 (0:00:01.145) 0:41:16.174 ***** 2026-02-05 05:22:03.662412 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662424 | orchestrator | 2026-02-05 05:22:03.662434 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-05 05:22:03.662445 | orchestrator | Thursday 05 February 2026 05:21:32 +0000 (0:00:01.145) 0:41:17.319 ***** 2026-02-05 05:22:03.662456 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662467 | orchestrator | 2026-02-05 05:22:03.662477 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:22:03.662488 | orchestrator | 
Thursday 05 February 2026 05:21:33 +0000 (0:00:01.132) 0:41:18.452 ***** 2026-02-05 05:22:03.662498 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662509 | orchestrator | 2026-02-05 05:22:03.662520 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:22:03.662530 | orchestrator | Thursday 05 February 2026 05:21:34 +0000 (0:00:01.156) 0:41:19.608 ***** 2026-02-05 05:22:03.662541 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662551 | orchestrator | 2026-02-05 05:22:03.662562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:22:03.662572 | orchestrator | Thursday 05 February 2026 05:21:35 +0000 (0:00:01.164) 0:41:20.773 ***** 2026-02-05 05:22:03.662583 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.662593 | orchestrator | 2026-02-05 05:22:03.662604 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:22:03.662615 | orchestrator | Thursday 05 February 2026 05:21:37 +0000 (0:00:01.124) 0:41:21.898 ***** 2026-02-05 05:22:03.662625 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:03.662636 | orchestrator | 2026-02-05 05:22:03.662646 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:22:03.662657 | orchestrator | Thursday 05 February 2026 05:21:37 +0000 (0:00:00.842) 0:41:22.741 ***** 2026-02-05 05:22:03.662668 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-05 05:22:03.662679 | orchestrator | 2026-02-05 05:22:03.662690 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:22:03.662701 | orchestrator | Thursday 05 February 2026 05:21:39 +0000 (0:00:01.083) 0:41:23.824 ***** 2026-02-05 05:22:03.662711 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-02-05 05:22:03.662722 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-05 05:22:03.662733 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-05 05:22:03.662743 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-05 05:22:03.662769 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-05 05:22:03.662781 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-05 05:22:03.662791 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-05 05:22:03.662801 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:22:03.662825 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:22:03.662858 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:22:03.662869 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:22:03.662880 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:22:03.662891 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:22:03.662902 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:22:03.662912 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-05 05:22:03.662923 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-05 05:22:03.662934 | orchestrator | 2026-02-05 05:22:03.662944 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:22:03.662955 | orchestrator | Thursday 05 February 2026 05:21:45 +0000 (0:00:06.580) 0:41:30.405 ***** 2026-02-05 05:22:03.662965 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-05 05:22:03.662976 | orchestrator | 2026-02-05 05:22:03.662986 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 05:22:03.662997 | orchestrator | Thursday 05 February 2026 05:21:46 +0000 (0:00:01.126) 0:41:31.532 ***** 2026-02-05 05:22:03.663008 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:22:03.663020 | orchestrator | 2026-02-05 05:22:03.663031 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:22:03.663041 | orchestrator | Thursday 05 February 2026 05:21:48 +0000 (0:00:01.512) 0:41:33.045 ***** 2026-02-05 05:22:03.663052 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:22:03.663063 | orchestrator | 2026-02-05 05:22:03.663074 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:22:03.663085 | orchestrator | Thursday 05 February 2026 05:21:49 +0000 (0:00:01.700) 0:41:34.745 ***** 2026-02-05 05:22:03.663095 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663106 | orchestrator | 2026-02-05 05:22:03.663117 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:22:03.663127 | orchestrator | Thursday 05 February 2026 05:21:50 +0000 (0:00:00.802) 0:41:35.547 ***** 2026-02-05 05:22:03.663138 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663149 | orchestrator | 2026-02-05 05:22:03.663159 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:22:03.663170 | orchestrator | Thursday 05 February 2026 05:21:51 +0000 (0:00:00.763) 0:41:36.310 ***** 2026-02-05 05:22:03.663181 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663251 | orchestrator | 2026-02-05 05:22:03.663262 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-05 05:22:03.663273 | orchestrator | Thursday 05 February 2026 05:21:52 +0000 (0:00:00.777) 0:41:37.088 ***** 2026-02-05 05:22:03.663284 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663294 | orchestrator | 2026-02-05 05:22:03.663305 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:22:03.663316 | orchestrator | Thursday 05 February 2026 05:21:53 +0000 (0:00:00.824) 0:41:37.912 ***** 2026-02-05 05:22:03.663326 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663337 | orchestrator | 2026-02-05 05:22:03.663348 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:22:03.663359 | orchestrator | Thursday 05 February 2026 05:21:53 +0000 (0:00:00.746) 0:41:38.659 ***** 2026-02-05 05:22:03.663370 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663380 | orchestrator | 2026-02-05 05:22:03.663391 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:22:03.663411 | orchestrator | Thursday 05 February 2026 05:21:54 +0000 (0:00:00.775) 0:41:39.435 ***** 2026-02-05 05:22:03.663421 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663432 | orchestrator | 2026-02-05 05:22:03.663443 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 05:22:03.663454 | orchestrator | Thursday 05 February 2026 05:21:55 +0000 (0:00:00.782) 0:41:40.218 ***** 2026-02-05 05:22:03.663465 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663476 | orchestrator | 2026-02-05 05:22:03.663487 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:22:03.663498 | orchestrator | Thursday 05 
February 2026 05:21:56 +0000 (0:00:00.822) 0:41:41.040 ***** 2026-02-05 05:22:03.663509 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663519 | orchestrator | 2026-02-05 05:22:03.663530 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:22:03.663541 | orchestrator | Thursday 05 February 2026 05:21:56 +0000 (0:00:00.776) 0:41:41.816 ***** 2026-02-05 05:22:03.663552 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:03.663563 | orchestrator | 2026-02-05 05:22:03.663573 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:22:03.663584 | orchestrator | Thursday 05 February 2026 05:21:57 +0000 (0:00:00.772) 0:41:42.589 ***** 2026-02-05 05:22:03.663595 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:03.663606 | orchestrator | 2026-02-05 05:22:03.663616 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:22:03.663627 | orchestrator | Thursday 05 February 2026 05:21:58 +0000 (0:00:00.820) 0:41:43.409 ***** 2026-02-05 05:22:03.663644 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:22:03.663655 | orchestrator | 2026-02-05 05:22:03.663666 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:22:03.663677 | orchestrator | Thursday 05 February 2026 05:22:02 +0000 (0:00:04.262) 0:41:47.672 ***** 2026-02-05 05:22:03.663696 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:22:44.748648 | orchestrator | 2026-02-05 05:22:44.748756 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:22:44.748770 | orchestrator | Thursday 05 February 2026 05:22:03 +0000 (0:00:00.799) 0:41:48.472 ***** 2026-02-05 05:22:44.748781 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-05 05:22:44.748792 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-05 05:22:44.748801 | orchestrator | 2026-02-05 05:22:44.748810 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:22:44.748817 | orchestrator | Thursday 05 February 2026 05:22:11 +0000 (0:00:07.586) 0:41:56.058 ***** 2026-02-05 05:22:44.748825 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.748834 | orchestrator | 2026-02-05 05:22:44.748843 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:22:44.748851 | orchestrator | Thursday 05 February 2026 05:22:12 +0000 (0:00:00.775) 0:41:56.834 ***** 2026-02-05 05:22:44.748858 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.748866 | orchestrator | 2026-02-05 05:22:44.748874 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:22:44.748884 | orchestrator | Thursday 05 February 2026 05:22:12 +0000 (0:00:00.763) 0:41:57.597 ***** 2026-02-05 05:22:44.748914 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.748921 | orchestrator | 2026-02-05 05:22:44.748928 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-05 05:22:44.748936 | orchestrator | Thursday 05 February 2026 05:22:13 +0000 (0:00:00.793) 0:41:58.391 ***** 2026-02-05 05:22:44.748943 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.748950 | orchestrator | 2026-02-05 05:22:44.748958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:22:44.748966 | orchestrator | Thursday 05 February 2026 05:22:14 +0000 (0:00:00.789) 0:41:59.181 ***** 2026-02-05 05:22:44.748973 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.748980 | orchestrator | 2026-02-05 05:22:44.748987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:22:44.748995 | orchestrator | Thursday 05 February 2026 05:22:15 +0000 (0:00:00.767) 0:41:59.948 ***** 2026-02-05 05:22:44.749002 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749012 | orchestrator | 2026-02-05 05:22:44.749019 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:22:44.749027 | orchestrator | Thursday 05 February 2026 05:22:16 +0000 (0:00:00.896) 0:42:00.845 ***** 2026-02-05 05:22:44.749034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:22:44.749043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:22:44.749050 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 05:22:44.749057 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749064 | orchestrator | 2026-02-05 05:22:44.749071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:22:44.749078 | orchestrator | Thursday 05 February 2026 05:22:17 +0000 (0:00:01.053) 0:42:01.899 ***** 2026-02-05 05:22:44.749085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:22:44.749093 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:22:44.749100 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 05:22:44.749107 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749114 | orchestrator | 2026-02-05 05:22:44.749121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:22:44.749128 | orchestrator | Thursday 05 February 2026 05:22:18 +0000 (0:00:01.080) 0:42:02.979 ***** 2026-02-05 05:22:44.749136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:22:44.749143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:22:44.749151 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 05:22:44.749158 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749165 | orchestrator | 2026-02-05 05:22:44.749173 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:22:44.749180 | orchestrator | Thursday 05 February 2026 05:22:19 +0000 (0:00:01.089) 0:42:04.069 ***** 2026-02-05 05:22:44.749188 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749231 | orchestrator | 2026-02-05 05:22:44.749239 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:22:44.749246 | orchestrator | Thursday 05 February 2026 05:22:20 +0000 (0:00:00.806) 0:42:04.876 ***** 2026-02-05 05:22:44.749254 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 05:22:44.749261 | orchestrator | 2026-02-05 05:22:44.749282 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:22:44.749290 | orchestrator | Thursday 05 February 2026 05:22:21 +0000 (0:00:01.006) 0:42:05.883 ***** 2026-02-05 05:22:44.749297 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749305 | orchestrator | 
2026-02-05 05:22:44.749312 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-05 05:22:44.749320 | orchestrator | Thursday 05 February 2026 05:22:22 +0000 (0:00:01.397) 0:42:07.280 ***** 2026-02-05 05:22:44.749334 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749341 | orchestrator | 2026-02-05 05:22:44.749366 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-05 05:22:44.749373 | orchestrator | Thursday 05 February 2026 05:22:23 +0000 (0:00:00.792) 0:42:08.073 ***** 2026-02-05 05:22:44.749381 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:22:44.749389 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:22:44.749396 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:22:44.749403 | orchestrator | 2026-02-05 05:22:44.749410 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-05 05:22:44.749418 | orchestrator | Thursday 05 February 2026 05:22:24 +0000 (0:00:01.583) 0:42:09.656 ***** 2026-02-05 05:22:44.749425 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-05 05:22:44.749432 | orchestrator | 2026-02-05 05:22:44.749440 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-05 05:22:44.749447 | orchestrator | Thursday 05 February 2026 05:22:25 +0000 (0:00:01.127) 0:42:10.783 ***** 2026-02-05 05:22:44.749455 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749462 | orchestrator | 2026-02-05 05:22:44.749469 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-05 05:22:44.749477 | orchestrator | Thursday 05 February 2026 05:22:27 +0000 (0:00:01.110) 
0:42:11.894 ***** 2026-02-05 05:22:44.749484 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749491 | orchestrator | 2026-02-05 05:22:44.749499 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-05 05:22:44.749506 | orchestrator | Thursday 05 February 2026 05:22:28 +0000 (0:00:01.147) 0:42:13.042 ***** 2026-02-05 05:22:44.749514 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749522 | orchestrator | 2026-02-05 05:22:44.749529 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-05 05:22:44.749537 | orchestrator | Thursday 05 February 2026 05:22:29 +0000 (0:00:01.433) 0:42:14.475 ***** 2026-02-05 05:22:44.749544 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749552 | orchestrator | 2026-02-05 05:22:44.749559 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-05 05:22:44.749566 | orchestrator | Thursday 05 February 2026 05:22:30 +0000 (0:00:01.157) 0:42:15.633 ***** 2026-02-05 05:22:44.749573 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 05:22:44.749581 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 05:22:44.749588 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 05:22:44.749595 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 05:22:44.749602 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 05:22:44.749610 | orchestrator | 2026-02-05 05:22:44.749617 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-05 05:22:44.749624 | orchestrator | Thursday 05 February 2026 05:22:33 +0000 (0:00:02.630) 0:42:18.264 ***** 2026-02-05 
05:22:44.749631 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749639 | orchestrator | 2026-02-05 05:22:44.749646 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-05 05:22:44.749653 | orchestrator | Thursday 05 February 2026 05:22:34 +0000 (0:00:00.836) 0:42:19.100 ***** 2026-02-05 05:22:44.749660 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-05 05:22:44.749668 | orchestrator | 2026-02-05 05:22:44.749675 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-05 05:22:44.749683 | orchestrator | Thursday 05 February 2026 05:22:35 +0000 (0:00:01.170) 0:42:20.271 ***** 2026-02-05 05:22:44.749690 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 05:22:44.749704 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-05 05:22:44.749711 | orchestrator | 2026-02-05 05:22:44.749719 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-05 05:22:44.749726 | orchestrator | Thursday 05 February 2026 05:22:37 +0000 (0:00:01.837) 0:42:22.109 ***** 2026-02-05 05:22:44.749733 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:22:44.749741 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 05:22:44.749748 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:22:44.749755 | orchestrator | 2026-02-05 05:22:44.749762 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:22:44.749769 | orchestrator | Thursday 05 February 2026 05:22:40 +0000 (0:00:03.351) 0:42:25.460 ***** 2026-02-05 05:22:44.749777 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-05 05:22:44.749784 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 
05:22:44.749791 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:22:44.749799 | orchestrator | 2026-02-05 05:22:44.749806 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-05 05:22:44.749813 | orchestrator | Thursday 05 February 2026 05:22:42 +0000 (0:00:01.646) 0:42:27.107 ***** 2026-02-05 05:22:44.749820 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749827 | orchestrator | 2026-02-05 05:22:44.749840 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-05 05:22:44.749847 | orchestrator | Thursday 05 February 2026 05:22:43 +0000 (0:00:00.899) 0:42:28.006 ***** 2026-02-05 05:22:44.749854 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749861 | orchestrator | 2026-02-05 05:22:44.749869 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-05 05:22:44.749876 | orchestrator | Thursday 05 February 2026 05:22:43 +0000 (0:00:00.754) 0:42:28.761 ***** 2026-02-05 05:22:44.749883 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:22:44.749891 | orchestrator | 2026-02-05 05:22:44.749902 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-05 05:23:49.045974 | orchestrator | Thursday 05 February 2026 05:22:44 +0000 (0:00:00.800) 0:42:29.562 ***** 2026-02-05 05:23:49.046122 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-05 05:23:49.046138 | orchestrator | 2026-02-05 05:23:49.046148 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-05 05:23:49.046157 | orchestrator | Thursday 05 February 2026 05:22:45 +0000 (0:00:01.106) 0:42:30.668 ***** 2026-02-05 05:23:49.046165 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:23:49.046175 | orchestrator | 2026-02-05 05:23:49.046183 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-05 05:23:49.046191 | orchestrator | Thursday 05 February 2026 05:22:47 +0000 (0:00:01.476) 0:42:32.145 ***** 2026-02-05 05:23:49.046240 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:23:49.046249 | orchestrator | 2026-02-05 05:23:49.046257 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-05 05:23:49.046274 | orchestrator | Thursday 05 February 2026 05:22:50 +0000 (0:00:03.476) 0:42:35.621 ***** 2026-02-05 05:23:49.046282 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-05 05:23:49.046290 | orchestrator | 2026-02-05 05:23:49.046297 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-05 05:23:49.046306 | orchestrator | Thursday 05 February 2026 05:22:51 +0000 (0:00:01.119) 0:42:36.740 ***** 2026-02-05 05:23:49.046313 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:23:49.046318 | orchestrator | 2026-02-05 05:23:49.046323 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-05 05:23:49.046329 | orchestrator | Thursday 05 February 2026 05:22:53 +0000 (0:00:01.985) 0:42:38.726 ***** 2026-02-05 05:23:49.046334 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:23:49.046339 | orchestrator | 2026-02-05 05:23:49.046365 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-05 05:23:49.046370 | orchestrator | Thursday 05 February 2026 05:22:55 +0000 (0:00:01.912) 0:42:40.638 ***** 2026-02-05 05:23:49.046375 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:23:49.046380 | orchestrator | 2026-02-05 05:23:49.046385 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-05 05:23:49.046390 | orchestrator | Thursday 05 February 2026 05:22:58 +0000 (0:00:02.248) 0:42:42.886 ***** 2026-02-05 
05:23:49.046406 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046412 | orchestrator | 2026-02-05 05:23:49.046417 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-05 05:23:49.046422 | orchestrator | Thursday 05 February 2026 05:22:59 +0000 (0:00:01.119) 0:42:44.005 ***** 2026-02-05 05:23:49.046427 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046432 | orchestrator | 2026-02-05 05:23:49.046437 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-05 05:23:49.046442 | orchestrator | Thursday 05 February 2026 05:23:00 +0000 (0:00:01.166) 0:42:45.172 ***** 2026-02-05 05:23:49.046448 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-02-05 05:23:49.046453 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-05 05:23:49.046463 | orchestrator | 2026-02-05 05:23:49.046468 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-05 05:23:49.046472 | orchestrator | Thursday 05 February 2026 05:23:02 +0000 (0:00:01.819) 0:42:46.992 ***** 2026-02-05 05:23:49.046477 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-02-05 05:23:49.046482 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-05 05:23:49.046487 | orchestrator | 2026-02-05 05:23:49.046492 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-05 05:23:49.046497 | orchestrator | Thursday 05 February 2026 05:23:05 +0000 (0:00:02.917) 0:42:49.909 ***** 2026-02-05 05:23:49.046502 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-05 05:23:49.046507 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-05 05:23:49.046512 | orchestrator | 2026-02-05 05:23:49.046517 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-05 05:23:49.046522 | orchestrator | Thursday 05 February 2026 05:23:10 +0000 (0:00:05.027) 
0:42:54.936 ***** 2026-02-05 05:23:49.046527 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046533 | orchestrator | 2026-02-05 05:23:49.046539 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-05 05:23:49.046545 | orchestrator | Thursday 05 February 2026 05:23:11 +0000 (0:00:00.895) 0:42:55.832 ***** 2026-02-05 05:23:49.046551 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046557 | orchestrator | 2026-02-05 05:23:49.046563 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-05 05:23:49.046568 | orchestrator | Thursday 05 February 2026 05:23:11 +0000 (0:00:00.878) 0:42:56.711 ***** 2026-02-05 05:23:49.046574 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046580 | orchestrator | 2026-02-05 05:23:49.046586 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-05 05:23:49.046592 | orchestrator | Thursday 05 February 2026 05:23:12 +0000 (0:00:00.855) 0:42:57.567 ***** 2026-02-05 05:23:49.046598 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046604 | orchestrator | 2026-02-05 05:23:49.046613 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-05 05:23:49.046622 | orchestrator | Thursday 05 February 2026 05:23:13 +0000 (0:00:00.820) 0:42:58.388 ***** 2026-02-05 05:23:49.046630 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:23:49.046638 | orchestrator | 2026-02-05 05:23:49.046646 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-05 05:23:49.046668 | orchestrator | Thursday 05 February 2026 05:23:14 +0000 (0:00:00.760) 0:42:59.148 ***** 2026-02-05 05:23:49.046677 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-05 05:23:49.046693 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-05 05:23:49.046702 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-05 05:23:49.046730 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-05 05:23:49.046739 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:23:49.046748 | orchestrator | 2026-02-05 05:23:49.046756 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-05 05:23:49.046765 | orchestrator | 2026-02-05 05:23:49.046774 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 05:23:49.046782 | orchestrator | Thursday 05 February 2026 05:23:28 +0000 (0:00:14.519) 0:43:13.668 ***** 2026-02-05 05:23:49.046790 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-05 05:23:49.046799 | orchestrator | 2026-02-05 05:23:49.046808 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 05:23:49.046816 | orchestrator | Thursday 05 February 2026 05:23:29 +0000 (0:00:01.112) 0:43:14.780 ***** 2026-02-05 05:23:49.046825 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046831 | orchestrator | 2026-02-05 05:23:49.046837 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:23:49.046843 | orchestrator | Thursday 05 February 2026 05:23:31 +0000 (0:00:01.428) 0:43:16.209 ***** 2026-02-05 05:23:49.046848 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046854 | orchestrator | 2026-02-05 05:23:49.046860 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:23:49.046865 | 
orchestrator | Thursday 05 February 2026 05:23:32 +0000 (0:00:01.137) 0:43:17.347 ***** 2026-02-05 05:23:49.046871 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046877 | orchestrator | 2026-02-05 05:23:49.046883 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:23:49.046889 | orchestrator | Thursday 05 February 2026 05:23:33 +0000 (0:00:01.433) 0:43:18.780 ***** 2026-02-05 05:23:49.046894 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046899 | orchestrator | 2026-02-05 05:23:49.046904 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:23:49.046908 | orchestrator | Thursday 05 February 2026 05:23:35 +0000 (0:00:01.155) 0:43:19.936 ***** 2026-02-05 05:23:49.046913 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046918 | orchestrator | 2026-02-05 05:23:49.046923 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:23:49.046928 | orchestrator | Thursday 05 February 2026 05:23:36 +0000 (0:00:01.168) 0:43:21.104 ***** 2026-02-05 05:23:49.046932 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046937 | orchestrator | 2026-02-05 05:23:49.046942 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:23:49.046947 | orchestrator | Thursday 05 February 2026 05:23:37 +0000 (0:00:01.131) 0:43:22.236 ***** 2026-02-05 05:23:49.046952 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:23:49.046957 | orchestrator | 2026-02-05 05:23:49.046962 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:23:49.046967 | orchestrator | Thursday 05 February 2026 05:23:38 +0000 (0:00:01.162) 0:43:23.398 ***** 2026-02-05 05:23:49.046972 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.046976 | orchestrator | 2026-02-05 05:23:49.046981 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:23:49.046986 | orchestrator | Thursday 05 February 2026 05:23:39 +0000 (0:00:01.102) 0:43:24.501 ***** 2026-02-05 05:23:49.046991 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:23:49.046996 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:23:49.047001 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:23:49.047011 | orchestrator | 2026-02-05 05:23:49.047016 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:23:49.047021 | orchestrator | Thursday 05 February 2026 05:23:41 +0000 (0:00:01.627) 0:43:26.129 ***** 2026-02-05 05:23:49.047025 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:23:49.047030 | orchestrator | 2026-02-05 05:23:49.047035 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:23:49.047040 | orchestrator | Thursday 05 February 2026 05:23:42 +0000 (0:00:01.218) 0:43:27.347 ***** 2026-02-05 05:23:49.047044 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:23:49.047049 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:23:49.047054 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:23:49.047059 | orchestrator | 2026-02-05 05:23:49.047064 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:23:49.047068 | orchestrator | Thursday 05 February 2026 05:23:45 +0000 (0:00:03.227) 0:43:30.574 ***** 2026-02-05 05:23:49.047073 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 05:23:49.047079 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 05:23:49.047084 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 05:23:49.047089 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:23:49.047094 | orchestrator | 2026-02-05 05:23:49.047098 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:23:49.047103 | orchestrator | Thursday 05 February 2026 05:23:47 +0000 (0:00:01.411) 0:43:31.986 ***** 2026-02-05 05:23:49.047113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:23:49.047125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:24:09.065477 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:24:09.065585 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.065601 | orchestrator | 2026-02-05 05:24:09.065613 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:24:09.065630 | orchestrator | Thursday 05 February 2026 05:23:49 +0000 (0:00:01.868) 0:43:33.855 ***** 2026-02-05 05:24:09.065648 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:09.065665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:09.065682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:09.065724 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.065742 | orchestrator | 2026-02-05 05:24:09.065757 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:24:09.065771 | orchestrator | Thursday 05 February 2026 05:23:50 +0000 (0:00:01.163) 0:43:35.018 ***** 2026-02-05 05:24:09.065788 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:23:43.112320', 'end': '2026-02-05 05:23:43.163131', 'delta': '0:00:00.050811', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:24:09.065808 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:23:43.732479', 'end': '2026-02-05 05:23:43.771990', 'delta': '0:00:00.039511', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:24:09.065863 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:23:44.558644', 'end': '2026-02-05 05:23:44.604622', 'delta': '0:00:00.045978', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:24:09.065879 | orchestrator | 2026-02-05 05:24:09.065889 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-02-05 05:24:09.065898 | orchestrator | Thursday 05 February 2026 05:23:51 +0000 (0:00:01.219) 0:43:36.238 ***** 2026-02-05 05:24:09.065907 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:09.065916 | orchestrator | 2026-02-05 05:24:09.065925 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:24:09.065934 | orchestrator | Thursday 05 February 2026 05:23:52 +0000 (0:00:01.552) 0:43:37.790 ***** 2026-02-05 05:24:09.065943 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.065951 | orchestrator | 2026-02-05 05:24:09.065960 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:24:09.065969 | orchestrator | Thursday 05 February 2026 05:23:54 +0000 (0:00:01.259) 0:43:39.050 ***** 2026-02-05 05:24:09.065978 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:09.065987 | orchestrator | 2026-02-05 05:24:09.065995 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:24:09.066004 | orchestrator | Thursday 05 February 2026 05:23:55 +0000 (0:00:01.129) 0:43:40.180 ***** 2026-02-05 05:24:09.066013 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:24:09.066085 | orchestrator | 2026-02-05 05:24:09.066094 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:24:09.066103 | orchestrator | Thursday 05 February 2026 05:23:57 +0000 (0:00:02.058) 0:43:42.238 ***** 2026-02-05 05:24:09.066112 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:09.066120 | orchestrator | 2026-02-05 05:24:09.066129 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:24:09.066138 | orchestrator | Thursday 05 February 2026 05:23:58 +0000 (0:00:01.154) 0:43:43.393 ***** 2026-02-05 05:24:09.066147 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
05:24:09.066155 | orchestrator | 2026-02-05 05:24:09.066164 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:24:09.066173 | orchestrator | Thursday 05 February 2026 05:23:59 +0000 (0:00:01.105) 0:43:44.498 ***** 2026-02-05 05:24:09.066181 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.066190 | orchestrator | 2026-02-05 05:24:09.066198 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:24:09.066229 | orchestrator | Thursday 05 February 2026 05:24:00 +0000 (0:00:01.238) 0:43:45.737 ***** 2026-02-05 05:24:09.066238 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.066247 | orchestrator | 2026-02-05 05:24:09.066256 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:24:09.066265 | orchestrator | Thursday 05 February 2026 05:24:02 +0000 (0:00:01.094) 0:43:46.832 ***** 2026-02-05 05:24:09.066273 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.066282 | orchestrator | 2026-02-05 05:24:09.066291 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:24:09.066300 | orchestrator | Thursday 05 February 2026 05:24:03 +0000 (0:00:01.078) 0:43:47.910 ***** 2026-02-05 05:24:09.066309 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:09.066317 | orchestrator | 2026-02-05 05:24:09.066326 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:24:09.066335 | orchestrator | Thursday 05 February 2026 05:24:04 +0000 (0:00:01.135) 0:43:49.045 ***** 2026-02-05 05:24:09.066344 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.066352 | orchestrator | 2026-02-05 05:24:09.066361 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:24:09.066370 | orchestrator | Thursday 05 
February 2026 05:24:05 +0000 (0:00:01.170) 0:43:50.216 ***** 2026-02-05 05:24:09.066379 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:09.066390 | orchestrator | 2026-02-05 05:24:09.066405 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:24:09.066418 | orchestrator | Thursday 05 February 2026 05:24:06 +0000 (0:00:01.179) 0:43:51.396 ***** 2026-02-05 05:24:09.066433 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:09.066449 | orchestrator | 2026-02-05 05:24:09.066464 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:24:09.066480 | orchestrator | Thursday 05 February 2026 05:24:07 +0000 (0:00:01.091) 0:43:52.488 ***** 2026-02-05 05:24:09.066493 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:09.066502 | orchestrator | 2026-02-05 05:24:09.066511 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:24:09.066520 | orchestrator | Thursday 05 February 2026 05:24:08 +0000 (0:00:01.174) 0:43:53.662 ***** 2026-02-05 05:24:09.066529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:09.066553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}})  2026-02-05 05:24:09.076017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:24:09.076087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}})  2026-02-05 05:24:09.076094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:09.076101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:09.076106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:24:09.076111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:09.076126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:24:09.076154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:09.076159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}})  2026-02-05 05:24:09.076165 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}})  2026-02-05 05:24:09.076169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:09.076182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 05:24:10.377291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:10.377405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:24:10.377420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:24:10.377432 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:10.377443 | orchestrator | 2026-02-05 05:24:10.377453 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:24:10.377477 | orchestrator | Thursday 05 February 2026 05:24:10 +0000 (0:00:01.305) 0:43:54.968 ***** 2026-02-05 05:24:10.378126 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378143 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378353 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:10.378407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:24:15.541367 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:15.541380 | orchestrator | 2026-02-05 05:24:15.541392 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:24:15.541405 | orchestrator | Thursday 05 February 2026 05:24:11 +0000 (0:00:01.389) 0:43:56.358 ***** 2026-02-05 05:24:15.541417 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:15.541428 | orchestrator | 2026-02-05 05:24:15.541440 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:24:15.541451 | orchestrator | Thursday 05 February 2026 05:24:12 +0000 (0:00:01.443) 0:43:57.801 ***** 2026-02-05 05:24:15.541462 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:15.541472 | orchestrator | 2026-02-05 05:24:15.541483 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:24:15.541494 | orchestrator | Thursday 05 February 2026 05:24:14 +0000 (0:00:01.115) 0:43:58.917 ***** 2026-02-05 05:24:15.541505 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:15.541516 | orchestrator | 2026-02-05 05:24:15.541527 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:24:15.541545 | orchestrator | Thursday 05 February 2026 05:24:15 +0000 (0:00:01.438) 0:44:00.355 ***** 2026-02-05 05:24:56.583563 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.583659 | orchestrator | 2026-02-05 05:24:56.583670 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:24:56.583680 | orchestrator | Thursday 05 February 2026 05:24:16 +0000 (0:00:01.080) 0:44:01.435 ***** 2026-02-05 05:24:56.583688 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
05:24:56.583695 | orchestrator | 2026-02-05 05:24:56.583702 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:24:56.583710 | orchestrator | Thursday 05 February 2026 05:24:17 +0000 (0:00:01.217) 0:44:02.653 ***** 2026-02-05 05:24:56.583718 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.583725 | orchestrator | 2026-02-05 05:24:56.583733 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:24:56.583740 | orchestrator | Thursday 05 February 2026 05:24:18 +0000 (0:00:01.115) 0:44:03.768 ***** 2026-02-05 05:24:56.583748 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-05 05:24:56.583756 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-05 05:24:56.583763 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-05 05:24:56.583770 | orchestrator | 2026-02-05 05:24:56.583778 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:24:56.583785 | orchestrator | Thursday 05 February 2026 05:24:20 +0000 (0:00:01.938) 0:44:05.706 ***** 2026-02-05 05:24:56.583792 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 05:24:56.583801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 05:24:56.583809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 05:24:56.583836 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.583844 | orchestrator | 2026-02-05 05:24:56.583851 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:24:56.583858 | orchestrator | Thursday 05 February 2026 05:24:22 +0000 (0:00:01.139) 0:44:06.846 ***** 2026-02-05 05:24:56.583865 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-05 05:24:56.583874 | 
orchestrator | 2026-02-05 05:24:56.583882 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:24:56.583890 | orchestrator | Thursday 05 February 2026 05:24:23 +0000 (0:00:01.097) 0:44:07.944 ***** 2026-02-05 05:24:56.583897 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.583903 | orchestrator | 2026-02-05 05:24:56.583911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:24:56.583917 | orchestrator | Thursday 05 February 2026 05:24:24 +0000 (0:00:01.157) 0:44:09.102 ***** 2026-02-05 05:24:56.583924 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.583931 | orchestrator | 2026-02-05 05:24:56.583938 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:24:56.583945 | orchestrator | Thursday 05 February 2026 05:24:25 +0000 (0:00:01.173) 0:44:10.276 ***** 2026-02-05 05:24:56.583952 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.583959 | orchestrator | 2026-02-05 05:24:56.583967 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:24:56.583973 | orchestrator | Thursday 05 February 2026 05:24:26 +0000 (0:00:01.121) 0:44:11.397 ***** 2026-02-05 05:24:56.583981 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.583989 | orchestrator | 2026-02-05 05:24:56.583996 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:24:56.584003 | orchestrator | Thursday 05 February 2026 05:24:27 +0000 (0:00:01.195) 0:44:12.593 ***** 2026-02-05 05:24:56.584010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:24:56.584018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:24:56.584025 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-05 05:24:56.584032 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584039 | orchestrator | 2026-02-05 05:24:56.584046 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:24:56.584053 | orchestrator | Thursday 05 February 2026 05:24:29 +0000 (0:00:01.383) 0:44:13.976 ***** 2026-02-05 05:24:56.584061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:24:56.584068 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:24:56.584076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:24:56.584083 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584089 | orchestrator | 2026-02-05 05:24:56.584096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:24:56.584116 | orchestrator | Thursday 05 February 2026 05:24:30 +0000 (0:00:01.392) 0:44:15.369 ***** 2026-02-05 05:24:56.584124 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:24:56.584131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:24:56.584139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:24:56.584146 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584154 | orchestrator | 2026-02-05 05:24:56.584162 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:24:56.584170 | orchestrator | Thursday 05 February 2026 05:24:31 +0000 (0:00:01.439) 0:44:16.808 ***** 2026-02-05 05:24:56.584176 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584184 | orchestrator | 2026-02-05 05:24:56.584191 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:24:56.584198 | orchestrator | Thursday 05 February 2026 05:24:33 +0000 
(0:00:01.146) 0:44:17.954 ***** 2026-02-05 05:24:56.584214 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 05:24:56.584222 | orchestrator | 2026-02-05 05:24:56.584253 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:24:56.584261 | orchestrator | Thursday 05 February 2026 05:24:34 +0000 (0:00:01.296) 0:44:19.251 ***** 2026-02-05 05:24:56.584286 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:24:56.584294 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:24:56.584301 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:24:56.584308 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:24:56.584315 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:24:56.584323 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-05 05:24:56.584331 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:24:56.584340 | orchestrator | 2026-02-05 05:24:56.584346 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:24:56.584353 | orchestrator | Thursday 05 February 2026 05:24:36 +0000 (0:00:02.189) 0:44:21.441 ***** 2026-02-05 05:24:56.584359 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:24:56.584366 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:24:56.584373 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:24:56.584380 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-05 05:24:56.584387 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:24:56.584393 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-05 05:24:56.584401 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:24:56.584408 | orchestrator | 2026-02-05 05:24:56.584414 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-05 05:24:56.584421 | orchestrator | Thursday 05 February 2026 05:24:38 +0000 (0:00:02.155) 0:44:23.596 ***** 2026-02-05 05:24:56.584427 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584434 | orchestrator | 2026-02-05 05:24:56.584441 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-05 05:24:56.584448 | orchestrator | Thursday 05 February 2026 05:24:39 +0000 (0:00:01.138) 0:44:24.734 ***** 2026-02-05 05:24:56.584455 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584461 | orchestrator | 2026-02-05 05:24:56.584468 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-05 05:24:56.584475 | orchestrator | Thursday 05 February 2026 05:24:40 +0000 (0:00:00.804) 0:44:25.539 ***** 2026-02-05 05:24:56.584481 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584488 | orchestrator | 2026-02-05 05:24:56.584495 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-05 05:24:56.584502 | orchestrator | Thursday 05 February 2026 05:24:41 +0000 (0:00:00.871) 0:44:26.410 ***** 2026-02-05 05:24:56.584509 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-02-05 05:24:56.584517 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-05 05:24:56.584524 | orchestrator | 2026-02-05 05:24:56.584531 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-05 05:24:56.584538 | orchestrator | Thursday 05 February 2026 05:24:45 +0000 (0:00:03.814) 0:44:30.225 ***** 2026-02-05 05:24:56.584545 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-05 05:24:56.584553 | orchestrator | 2026-02-05 05:24:56.584559 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:24:56.584567 | orchestrator | Thursday 05 February 2026 05:24:46 +0000 (0:00:01.126) 0:44:31.352 ***** 2026-02-05 05:24:56.584584 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-05 05:24:56.584592 | orchestrator | 2026-02-05 05:24:56.584599 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:24:56.584606 | orchestrator | Thursday 05 February 2026 05:24:47 +0000 (0:00:01.118) 0:44:32.470 ***** 2026-02-05 05:24:56.584613 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584620 | orchestrator | 2026-02-05 05:24:56.584628 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:24:56.584635 | orchestrator | Thursday 05 February 2026 05:24:48 +0000 (0:00:01.132) 0:44:33.603 ***** 2026-02-05 05:24:56.584643 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584650 | orchestrator | 2026-02-05 05:24:56.584656 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:24:56.584671 | orchestrator | Thursday 05 February 2026 05:24:50 +0000 (0:00:01.518) 0:44:35.122 ***** 2026-02-05 05:24:56.584679 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584686 | orchestrator | 2026-02-05 05:24:56.584694 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:24:56.584701 | orchestrator | 
Thursday 05 February 2026 05:24:51 +0000 (0:00:01.506) 0:44:36.628 ***** 2026-02-05 05:24:56.584708 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:24:56.584716 | orchestrator | 2026-02-05 05:24:56.584723 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:24:56.584730 | orchestrator | Thursday 05 February 2026 05:24:53 +0000 (0:00:01.516) 0:44:38.144 ***** 2026-02-05 05:24:56.584738 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584745 | orchestrator | 2026-02-05 05:24:56.584753 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:24:56.584760 | orchestrator | Thursday 05 February 2026 05:24:54 +0000 (0:00:01.106) 0:44:39.251 ***** 2026-02-05 05:24:56.584767 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584775 | orchestrator | 2026-02-05 05:24:56.584782 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:24:56.584789 | orchestrator | Thursday 05 February 2026 05:24:55 +0000 (0:00:01.078) 0:44:40.330 ***** 2026-02-05 05:24:56.584795 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:24:56.584802 | orchestrator | 2026-02-05 05:24:56.584819 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:25:36.485415 | orchestrator | Thursday 05 February 2026 05:24:56 +0000 (0:00:01.060) 0:44:41.390 ***** 2026-02-05 05:25:36.485525 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485543 | orchestrator | 2026-02-05 05:25:36.485556 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:25:36.485567 | orchestrator | Thursday 05 February 2026 05:24:58 +0000 (0:00:01.471) 0:44:42.862 ***** 2026-02-05 05:25:36.485577 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485588 | orchestrator | 2026-02-05 05:25:36.485599 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:25:36.485610 | orchestrator | Thursday 05 February 2026 05:24:59 +0000 (0:00:01.491) 0:44:44.354 ***** 2026-02-05 05:25:36.485622 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.485634 | orchestrator | 2026-02-05 05:25:36.485644 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:25:36.485655 | orchestrator | Thursday 05 February 2026 05:25:00 +0000 (0:00:00.725) 0:44:45.080 ***** 2026-02-05 05:25:36.485665 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.485675 | orchestrator | 2026-02-05 05:25:36.485687 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:25:36.485698 | orchestrator | Thursday 05 February 2026 05:25:00 +0000 (0:00:00.722) 0:44:45.802 ***** 2026-02-05 05:25:36.485709 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485720 | orchestrator | 2026-02-05 05:25:36.485730 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 05:25:36.485765 | orchestrator | Thursday 05 February 2026 05:25:01 +0000 (0:00:00.740) 0:44:46.543 ***** 2026-02-05 05:25:36.485775 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485785 | orchestrator | 2026-02-05 05:25:36.485795 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:25:36.485806 | orchestrator | Thursday 05 February 2026 05:25:02 +0000 (0:00:00.762) 0:44:47.306 ***** 2026-02-05 05:25:36.485815 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485824 | orchestrator | 2026-02-05 05:25:36.485834 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:25:36.485844 | orchestrator | Thursday 05 February 2026 05:25:03 +0000 (0:00:00.747) 0:44:48.054 ***** 2026-02-05 05:25:36.485854 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.485864 | orchestrator | 2026-02-05 05:25:36.485874 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:25:36.485885 | orchestrator | Thursday 05 February 2026 05:25:04 +0000 (0:00:00.773) 0:44:48.827 ***** 2026-02-05 05:25:36.485895 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.485906 | orchestrator | 2026-02-05 05:25:36.485917 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 05:25:36.485928 | orchestrator | Thursday 05 February 2026 05:25:04 +0000 (0:00:00.724) 0:44:49.551 ***** 2026-02-05 05:25:36.485938 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.485945 | orchestrator | 2026-02-05 05:25:36.485951 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:25:36.485957 | orchestrator | Thursday 05 February 2026 05:25:05 +0000 (0:00:00.806) 0:44:50.357 ***** 2026-02-05 05:25:36.485964 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485970 | orchestrator | 2026-02-05 05:25:36.485976 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 05:25:36.485983 | orchestrator | Thursday 05 February 2026 05:25:06 +0000 (0:00:00.816) 0:44:51.174 ***** 2026-02-05 05:25:36.485989 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.485995 | orchestrator | 2026-02-05 05:25:36.486002 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 05:25:36.486008 | orchestrator | Thursday 05 February 2026 05:25:07 +0000 (0:00:00.841) 0:44:52.016 ***** 2026-02-05 05:25:36.486014 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486045 | orchestrator | 2026-02-05 05:25:36.486051 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 
05:25:36.486057 | orchestrator | Thursday 05 February 2026 05:25:07 +0000 (0:00:00.744) 0:44:52.760 ***** 2026-02-05 05:25:36.486063 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486070 | orchestrator | 2026-02-05 05:25:36.486076 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 05:25:36.486083 | orchestrator | Thursday 05 February 2026 05:25:08 +0000 (0:00:00.747) 0:44:53.508 ***** 2026-02-05 05:25:36.486090 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486096 | orchestrator | 2026-02-05 05:25:36.486103 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 05:25:36.486109 | orchestrator | Thursday 05 February 2026 05:25:09 +0000 (0:00:00.752) 0:44:54.260 ***** 2026-02-05 05:25:36.486116 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486122 | orchestrator | 2026-02-05 05:25:36.486128 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 05:25:36.486146 | orchestrator | Thursday 05 February 2026 05:25:10 +0000 (0:00:00.726) 0:44:54.987 ***** 2026-02-05 05:25:36.486152 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486158 | orchestrator | 2026-02-05 05:25:36.486165 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 05:25:36.486171 | orchestrator | Thursday 05 February 2026 05:25:10 +0000 (0:00:00.767) 0:44:55.755 ***** 2026-02-05 05:25:36.486177 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486183 | orchestrator | 2026-02-05 05:25:36.486189 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 05:25:36.486204 | orchestrator | Thursday 05 February 2026 05:25:11 +0000 (0:00:00.762) 0:44:56.518 ***** 2026-02-05 05:25:36.486210 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486216 | 
orchestrator | 2026-02-05 05:25:36.486222 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 05:25:36.486229 | orchestrator | Thursday 05 February 2026 05:25:12 +0000 (0:00:00.741) 0:44:57.259 ***** 2026-02-05 05:25:36.486236 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486242 | orchestrator | 2026-02-05 05:25:36.486271 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:25:36.486283 | orchestrator | Thursday 05 February 2026 05:25:13 +0000 (0:00:00.804) 0:44:58.064 ***** 2026-02-05 05:25:36.486310 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486317 | orchestrator | 2026-02-05 05:25:36.486323 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:25:36.486329 | orchestrator | Thursday 05 February 2026 05:25:14 +0000 (0:00:00.765) 0:44:58.830 ***** 2026-02-05 05:25:36.486335 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486344 | orchestrator | 2026-02-05 05:25:36.486354 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:25:36.486364 | orchestrator | Thursday 05 February 2026 05:25:14 +0000 (0:00:00.771) 0:44:59.601 ***** 2026-02-05 05:25:36.486373 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486383 | orchestrator | 2026-02-05 05:25:36.486392 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-05 05:25:36.486403 | orchestrator | Thursday 05 February 2026 05:25:15 +0000 (0:00:00.737) 0:45:00.339 ***** 2026-02-05 05:25:36.486413 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486423 | orchestrator | 2026-02-05 05:25:36.486432 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:25:36.486442 | orchestrator | Thursday 05 
February 2026 05:25:16 +0000 (0:00:00.765) 0:45:01.104 ***** 2026-02-05 05:25:36.486452 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.486462 | orchestrator | 2026-02-05 05:25:36.486472 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:25:36.486482 | orchestrator | Thursday 05 February 2026 05:25:17 +0000 (0:00:01.644) 0:45:02.749 ***** 2026-02-05 05:25:36.486491 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.486501 | orchestrator | 2026-02-05 05:25:36.486511 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:25:36.486520 | orchestrator | Thursday 05 February 2026 05:25:19 +0000 (0:00:01.909) 0:45:04.658 ***** 2026-02-05 05:25:36.486531 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-05 05:25:36.486541 | orchestrator | 2026-02-05 05:25:36.486552 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:25:36.486562 | orchestrator | Thursday 05 February 2026 05:25:20 +0000 (0:00:01.112) 0:45:05.771 ***** 2026-02-05 05:25:36.486572 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486583 | orchestrator | 2026-02-05 05:25:36.486593 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:25:36.486604 | orchestrator | Thursday 05 February 2026 05:25:22 +0000 (0:00:01.135) 0:45:06.906 ***** 2026-02-05 05:25:36.486615 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486624 | orchestrator | 2026-02-05 05:25:36.486634 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:25:36.486643 | orchestrator | Thursday 05 February 2026 05:25:23 +0000 (0:00:01.115) 0:45:08.021 ***** 2026-02-05 05:25:36.486653 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:25:36.486663 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:25:36.486675 | orchestrator | 2026-02-05 05:25:36.486686 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:25:36.486696 | orchestrator | Thursday 05 February 2026 05:25:24 +0000 (0:00:01.795) 0:45:09.817 ***** 2026-02-05 05:25:36.486717 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.486727 | orchestrator | 2026-02-05 05:25:36.486737 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:25:36.486747 | orchestrator | Thursday 05 February 2026 05:25:26 +0000 (0:00:01.447) 0:45:11.265 ***** 2026-02-05 05:25:36.486758 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486768 | orchestrator | 2026-02-05 05:25:36.486779 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:25:36.486785 | orchestrator | Thursday 05 February 2026 05:25:27 +0000 (0:00:01.140) 0:45:12.406 ***** 2026-02-05 05:25:36.486791 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486797 | orchestrator | 2026-02-05 05:25:36.486804 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:25:36.486810 | orchestrator | Thursday 05 February 2026 05:25:28 +0000 (0:00:00.805) 0:45:13.211 ***** 2026-02-05 05:25:36.486816 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486822 | orchestrator | 2026-02-05 05:25:36.486828 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:25:36.486834 | orchestrator | Thursday 05 February 2026 05:25:29 +0000 (0:00:00.753) 0:45:13.965 ***** 2026-02-05 05:25:36.486841 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-02-05 05:25:36.486847 | orchestrator | 2026-02-05 05:25:36.486853 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:25:36.486866 | orchestrator | Thursday 05 February 2026 05:25:30 +0000 (0:00:01.084) 0:45:15.049 ***** 2026-02-05 05:25:36.486872 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:25:36.486878 | orchestrator | 2026-02-05 05:25:36.486885 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:25:36.486891 | orchestrator | Thursday 05 February 2026 05:25:32 +0000 (0:00:02.737) 0:45:17.787 ***** 2026-02-05 05:25:36.486897 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:25:36.486904 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:25:36.486910 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:25:36.486916 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486922 | orchestrator | 2026-02-05 05:25:36.486928 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:25:36.486934 | orchestrator | Thursday 05 February 2026 05:25:34 +0000 (0:00:01.170) 0:45:18.957 ***** 2026-02-05 05:25:36.486941 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:25:36.486947 | orchestrator | 2026-02-05 05:25:36.486953 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-05 05:25:36.486959 | orchestrator | Thursday 05 February 2026 05:25:35 +0000 (0:00:01.137) 0:45:20.095 ***** 2026-02-05 05:25:36.486974 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.015242 | orchestrator | 2026-02-05 05:26:19.015422 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:26:19.015456 | 
orchestrator | Thursday 05 February 2026 05:25:36 +0000 (0:00:01.202) 0:45:21.297 ***** 2026-02-05 05:26:19.015476 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.015497 | orchestrator | 2026-02-05 05:26:19.015516 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:26:19.015534 | orchestrator | Thursday 05 February 2026 05:25:37 +0000 (0:00:01.142) 0:45:22.440 ***** 2026-02-05 05:26:19.015554 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.015574 | orchestrator | 2026-02-05 05:26:19.015593 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:26:19.015612 | orchestrator | Thursday 05 February 2026 05:25:38 +0000 (0:00:01.150) 0:45:23.590 ***** 2026-02-05 05:26:19.015630 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.015648 | orchestrator | 2026-02-05 05:26:19.015665 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:26:19.015721 | orchestrator | Thursday 05 February 2026 05:25:39 +0000 (0:00:00.786) 0:45:24.376 ***** 2026-02-05 05:26:19.015739 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:19.015761 | orchestrator | 2026-02-05 05:26:19.015781 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:26:19.015801 | orchestrator | Thursday 05 February 2026 05:25:41 +0000 (0:00:02.128) 0:45:26.505 ***** 2026-02-05 05:26:19.015822 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:19.015843 | orchestrator | 2026-02-05 05:26:19.015863 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:26:19.015881 | orchestrator | Thursday 05 February 2026 05:25:42 +0000 (0:00:00.774) 0:45:27.280 ***** 2026-02-05 05:26:19.015900 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-02-05 05:26:19.015919 | orchestrator | 2026-02-05 05:26:19.015938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:26:19.015958 | orchestrator | Thursday 05 February 2026 05:25:43 +0000 (0:00:01.109) 0:45:28.389 ***** 2026-02-05 05:26:19.015977 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.015994 | orchestrator | 2026-02-05 05:26:19.016013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:26:19.016030 | orchestrator | Thursday 05 February 2026 05:25:44 +0000 (0:00:01.137) 0:45:29.526 ***** 2026-02-05 05:26:19.016048 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016065 | orchestrator | 2026-02-05 05:26:19.016083 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:26:19.016100 | orchestrator | Thursday 05 February 2026 05:25:45 +0000 (0:00:01.199) 0:45:30.726 ***** 2026-02-05 05:26:19.016118 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016139 | orchestrator | 2026-02-05 05:26:19.016158 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:26:19.016178 | orchestrator | Thursday 05 February 2026 05:25:47 +0000 (0:00:01.165) 0:45:31.891 ***** 2026-02-05 05:26:19.016197 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016216 | orchestrator | 2026-02-05 05:26:19.016233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-05 05:26:19.016252 | orchestrator | Thursday 05 February 2026 05:25:48 +0000 (0:00:01.119) 0:45:33.011 ***** 2026-02-05 05:26:19.016350 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016366 | orchestrator | 2026-02-05 05:26:19.016378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:26:19.016390 | orchestrator | 
Thursday 05 February 2026 05:25:49 +0000 (0:00:01.174) 0:45:34.185 ***** 2026-02-05 05:26:19.016401 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016413 | orchestrator | 2026-02-05 05:26:19.016425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:26:19.016436 | orchestrator | Thursday 05 February 2026 05:25:50 +0000 (0:00:01.132) 0:45:35.317 ***** 2026-02-05 05:26:19.016448 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016460 | orchestrator | 2026-02-05 05:26:19.016472 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:26:19.016483 | orchestrator | Thursday 05 February 2026 05:25:51 +0000 (0:00:01.130) 0:45:36.448 ***** 2026-02-05 05:26:19.016495 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.016506 | orchestrator | 2026-02-05 05:26:19.016518 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:26:19.016530 | orchestrator | Thursday 05 February 2026 05:25:52 +0000 (0:00:01.138) 0:45:37.586 ***** 2026-02-05 05:26:19.016541 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:19.016553 | orchestrator | 2026-02-05 05:26:19.016565 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:26:19.016595 | orchestrator | Thursday 05 February 2026 05:25:53 +0000 (0:00:00.799) 0:45:38.385 ***** 2026-02-05 05:26:19.016608 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-05 05:26:19.016634 | orchestrator | 2026-02-05 05:26:19.016646 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:26:19.016657 | orchestrator | Thursday 05 February 2026 05:25:54 +0000 (0:00:01.092) 0:45:39.478 ***** 2026-02-05 05:26:19.016669 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-02-05 05:26:19.016681 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-05 05:26:19.016693 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-05 05:26:19.016704 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-05 05:26:19.016716 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-05 05:26:19.016728 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-05 05:26:19.016739 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-05 05:26:19.016751 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:26:19.016764 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:26:19.016803 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:26:19.016814 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:26:19.016824 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:26:19.016835 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:26:19.016845 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:26:19.016855 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-05 05:26:19.016866 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-05 05:26:19.016876 | orchestrator | 2026-02-05 05:26:19.016887 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:26:19.016897 | orchestrator | Thursday 05 February 2026 05:26:00 +0000 (0:00:06.319) 0:45:45.798 ***** 2026-02-05 05:26:19.016907 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-05 05:26:19.016918 | orchestrator | 2026-02-05 05:26:19.016928 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 05:26:19.016938 | orchestrator | Thursday 05 February 2026 05:26:02 +0000 (0:00:01.088) 0:45:46.886 ***** 2026-02-05 05:26:19.016949 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:26:19.016961 | orchestrator | 2026-02-05 05:26:19.016971 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:26:19.016981 | orchestrator | Thursday 05 February 2026 05:26:03 +0000 (0:00:01.494) 0:45:48.381 ***** 2026-02-05 05:26:19.016991 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:26:19.017002 | orchestrator | 2026-02-05 05:26:19.017012 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:26:19.017022 | orchestrator | Thursday 05 February 2026 05:26:05 +0000 (0:00:01.633) 0:45:50.015 ***** 2026-02-05 05:26:19.017033 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017043 | orchestrator | 2026-02-05 05:26:19.017054 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:26:19.017064 | orchestrator | Thursday 05 February 2026 05:26:06 +0000 (0:00:00.835) 0:45:50.851 ***** 2026-02-05 05:26:19.017074 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017085 | orchestrator | 2026-02-05 05:26:19.017095 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:26:19.017105 | orchestrator | Thursday 05 February 2026 05:26:06 +0000 (0:00:00.797) 0:45:51.648 ***** 2026-02-05 05:26:19.017116 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017126 | orchestrator | 2026-02-05 05:26:19.017137 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-05 05:26:19.017154 | orchestrator | Thursday 05 February 2026 05:26:07 +0000 (0:00:00.770) 0:45:52.419 ***** 2026-02-05 05:26:19.017164 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017175 | orchestrator | 2026-02-05 05:26:19.017185 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:26:19.017196 | orchestrator | Thursday 05 February 2026 05:26:08 +0000 (0:00:00.784) 0:45:53.204 ***** 2026-02-05 05:26:19.017206 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017216 | orchestrator | 2026-02-05 05:26:19.017227 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:26:19.017237 | orchestrator | Thursday 05 February 2026 05:26:09 +0000 (0:00:00.770) 0:45:53.974 ***** 2026-02-05 05:26:19.017248 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017258 | orchestrator | 2026-02-05 05:26:19.017320 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:26:19.017332 | orchestrator | Thursday 05 February 2026 05:26:09 +0000 (0:00:00.774) 0:45:54.749 ***** 2026-02-05 05:26:19.017342 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017352 | orchestrator | 2026-02-05 05:26:19.017362 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 05:26:19.017372 | orchestrator | Thursday 05 February 2026 05:26:10 +0000 (0:00:00.795) 0:45:55.545 ***** 2026-02-05 05:26:19.017381 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017391 | orchestrator | 2026-02-05 05:26:19.017401 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:26:19.017411 | orchestrator | Thursday 05 
February 2026 05:26:11 +0000 (0:00:00.767) 0:45:56.313 ***** 2026-02-05 05:26:19.017426 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017436 | orchestrator | 2026-02-05 05:26:19.017446 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:26:19.017455 | orchestrator | Thursday 05 February 2026 05:26:12 +0000 (0:00:00.776) 0:45:57.089 ***** 2026-02-05 05:26:19.017465 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:19.017475 | orchestrator | 2026-02-05 05:26:19.017484 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:26:19.017494 | orchestrator | Thursday 05 February 2026 05:26:13 +0000 (0:00:00.781) 0:45:57.871 ***** 2026-02-05 05:26:19.017504 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:19.017514 | orchestrator | 2026-02-05 05:26:19.017523 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:26:19.017533 | orchestrator | Thursday 05 February 2026 05:26:13 +0000 (0:00:00.822) 0:45:58.693 ***** 2026-02-05 05:26:19.017543 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:26:19.017552 | orchestrator | 2026-02-05 05:26:19.017562 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:26:19.017572 | orchestrator | Thursday 05 February 2026 05:26:18 +0000 (0:00:04.271) 0:46:02.964 ***** 2026-02-05 05:26:19.017588 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:26:59.859210 | orchestrator | 2026-02-05 05:26:59.859343 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:26:59.859361 | orchestrator | Thursday 05 February 2026 05:26:19 +0000 (0:00:00.863) 0:46:03.827 ***** 2026-02-05 05:26:59.859375 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-05 05:26:59.859389 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-05 05:26:59.859426 | orchestrator | 2026-02-05 05:26:59.859438 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:26:59.859449 | orchestrator | Thursday 05 February 2026 05:26:26 +0000 (0:00:07.384) 0:46:11.212 ***** 2026-02-05 05:26:59.859460 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859471 | orchestrator | 2026-02-05 05:26:59.859482 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:26:59.859492 | orchestrator | Thursday 05 February 2026 05:26:27 +0000 (0:00:00.784) 0:46:11.997 ***** 2026-02-05 05:26:59.859503 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859513 | orchestrator | 2026-02-05 05:26:59.859523 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:26:59.859536 | orchestrator | Thursday 05 February 2026 05:26:27 +0000 (0:00:00.744) 0:46:12.742 ***** 2026-02-05 05:26:59.859547 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859557 | orchestrator | 2026-02-05 05:26:59.859567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-05 05:26:59.859577 | orchestrator | Thursday 05 February 2026 05:26:28 +0000 (0:00:00.782) 0:46:13.524 ***** 2026-02-05 05:26:59.859587 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859598 | orchestrator | 2026-02-05 05:26:59.859607 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:26:59.859618 | orchestrator | Thursday 05 February 2026 05:26:29 +0000 (0:00:00.779) 0:46:14.304 ***** 2026-02-05 05:26:59.859629 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859640 | orchestrator | 2026-02-05 05:26:59.859650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:26:59.859660 | orchestrator | Thursday 05 February 2026 05:26:30 +0000 (0:00:00.802) 0:46:15.106 ***** 2026-02-05 05:26:59.859671 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.859682 | orchestrator | 2026-02-05 05:26:59.859692 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:26:59.859702 | orchestrator | Thursday 05 February 2026 05:26:31 +0000 (0:00:00.866) 0:46:15.973 ***** 2026-02-05 05:26:59.859712 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:26:59.859724 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:26:59.859734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:26:59.859745 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859755 | orchestrator | 2026-02-05 05:26:59.859766 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:26:59.859777 | orchestrator | Thursday 05 February 2026 05:26:32 +0000 (0:00:01.036) 0:46:17.009 ***** 2026-02-05 05:26:59.859787 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:26:59.859798 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:26:59.859809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:26:59.859820 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859829 | orchestrator | 2026-02-05 05:26:59.859840 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:26:59.859851 | orchestrator | Thursday 05 February 2026 05:26:33 +0000 (0:00:01.066) 0:46:18.076 ***** 2026-02-05 05:26:59.859861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:26:59.859887 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:26:59.859898 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:26:59.859909 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.859920 | orchestrator | 2026-02-05 05:26:59.859931 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:26:59.859942 | orchestrator | Thursday 05 February 2026 05:26:34 +0000 (0:00:01.020) 0:46:19.097 ***** 2026-02-05 05:26:59.859960 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.859970 | orchestrator | 2026-02-05 05:26:59.859998 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:26:59.860009 | orchestrator | Thursday 05 February 2026 05:26:35 +0000 (0:00:00.804) 0:46:19.901 ***** 2026-02-05 05:26:59.860019 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 05:26:59.860029 | orchestrator | 2026-02-05 05:26:59.860037 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:26:59.860045 | orchestrator | Thursday 05 February 2026 05:26:36 +0000 (0:00:01.430) 0:46:21.332 ***** 2026-02-05 05:26:59.860052 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.860060 | orchestrator | 
2026-02-05 05:26:59.860066 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-05 05:26:59.860074 | orchestrator | Thursday 05 February 2026 05:26:37 +0000 (0:00:01.388) 0:46:22.721 ***** 2026-02-05 05:26:59.860081 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.860089 | orchestrator | 2026-02-05 05:26:59.860114 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-05 05:26:59.860126 | orchestrator | Thursday 05 February 2026 05:26:38 +0000 (0:00:00.776) 0:46:23.497 ***** 2026-02-05 05:26:59.860136 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:26:59.860149 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:26:59.860164 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:26:59.860173 | orchestrator | 2026-02-05 05:26:59.860183 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-05 05:26:59.860193 | orchestrator | Thursday 05 February 2026 05:26:39 +0000 (0:00:01.276) 0:46:24.773 ***** 2026-02-05 05:26:59.860202 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-05 05:26:59.860212 | orchestrator | 2026-02-05 05:26:59.860221 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-05 05:26:59.860231 | orchestrator | Thursday 05 February 2026 05:26:41 +0000 (0:00:01.136) 0:46:25.910 ***** 2026-02-05 05:26:59.860241 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.860251 | orchestrator | 2026-02-05 05:26:59.860263 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-05 05:26:59.860270 | orchestrator | Thursday 05 February 2026 05:26:42 +0000 (0:00:01.109) 
0:46:27.020 ***** 2026-02-05 05:26:59.860276 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.860282 | orchestrator | 2026-02-05 05:26:59.860331 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-05 05:26:59.860338 | orchestrator | Thursday 05 February 2026 05:26:43 +0000 (0:00:01.160) 0:46:28.181 ***** 2026-02-05 05:26:59.860344 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.860353 | orchestrator | 2026-02-05 05:26:59.860364 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-05 05:26:59.860374 | orchestrator | Thursday 05 February 2026 05:26:44 +0000 (0:00:01.502) 0:46:29.683 ***** 2026-02-05 05:26:59.860384 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.860394 | orchestrator | 2026-02-05 05:26:59.860404 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-05 05:26:59.860414 | orchestrator | Thursday 05 February 2026 05:26:46 +0000 (0:00:01.162) 0:46:30.845 ***** 2026-02-05 05:26:59.860423 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 05:26:59.860433 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 05:26:59.860443 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 05:26:59.860454 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 05:26:59.860464 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 05:26:59.860483 | orchestrator | 2026-02-05 05:26:59.860493 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-05 05:26:59.860504 | orchestrator | Thursday 05 February 2026 05:26:48 +0000 (0:00:02.507) 0:46:33.353 ***** 2026-02-05 
05:26:59.860514 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.860525 | orchestrator | 2026-02-05 05:26:59.860531 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-05 05:26:59.860538 | orchestrator | Thursday 05 February 2026 05:26:49 +0000 (0:00:00.755) 0:46:34.109 ***** 2026-02-05 05:26:59.860544 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-05 05:26:59.860550 | orchestrator | 2026-02-05 05:26:59.860556 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-05 05:26:59.860562 | orchestrator | Thursday 05 February 2026 05:26:50 +0000 (0:00:01.186) 0:46:35.295 ***** 2026-02-05 05:26:59.860569 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 05:26:59.860575 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-05 05:26:59.860581 | orchestrator | 2026-02-05 05:26:59.860587 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-05 05:26:59.860593 | orchestrator | Thursday 05 February 2026 05:26:52 +0000 (0:00:01.840) 0:46:37.136 ***** 2026-02-05 05:26:59.860599 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:26:59.860606 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 05:26:59.860619 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:26:59.860625 | orchestrator | 2026-02-05 05:26:59.860631 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:26:59.860637 | orchestrator | Thursday 05 February 2026 05:26:55 +0000 (0:00:03.430) 0:46:40.566 ***** 2026-02-05 05:26:59.860644 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-05 05:26:59.860650 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 
05:26:59.860659 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:26:59.860670 | orchestrator | 2026-02-05 05:26:59.860680 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-05 05:26:59.860689 | orchestrator | Thursday 05 February 2026 05:26:57 +0000 (0:00:01.646) 0:46:42.213 ***** 2026-02-05 05:26:59.860699 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.860708 | orchestrator | 2026-02-05 05:26:59.860719 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-05 05:26:59.860730 | orchestrator | Thursday 05 February 2026 05:26:58 +0000 (0:00:00.868) 0:46:43.081 ***** 2026-02-05 05:26:59.860741 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.860751 | orchestrator | 2026-02-05 05:26:59.860762 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-05 05:26:59.860769 | orchestrator | Thursday 05 February 2026 05:26:59 +0000 (0:00:00.770) 0:46:43.852 ***** 2026-02-05 05:26:59.860775 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:26:59.860781 | orchestrator | 2026-02-05 05:26:59.860795 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-05 05:29:23.390765 | orchestrator | Thursday 05 February 2026 05:26:59 +0000 (0:00:00.813) 0:46:44.666 ***** 2026-02-05 05:29:23.390864 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-05 05:29:23.390877 | orchestrator | 2026-02-05 05:29:23.390884 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-05 05:29:23.390889 | orchestrator | Thursday 05 February 2026 05:27:00 +0000 (0:00:01.105) 0:46:45.772 ***** 2026-02-05 05:29:23.390894 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.390901 | orchestrator | 2026-02-05 05:29:23.390906 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-05 05:29:23.390911 | orchestrator | Thursday 05 February 2026 05:27:02 +0000 (0:00:01.494) 0:46:47.266 ***** 2026-02-05 05:29:23.390916 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.390937 | orchestrator | 2026-02-05 05:29:23.390942 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-05 05:29:23.390947 | orchestrator | Thursday 05 February 2026 05:27:05 +0000 (0:00:03.467) 0:46:50.734 ***** 2026-02-05 05:29:23.390952 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-05 05:29:23.390957 | orchestrator | 2026-02-05 05:29:23.390962 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-05 05:29:23.390967 | orchestrator | Thursday 05 February 2026 05:27:07 +0000 (0:00:01.210) 0:46:51.945 ***** 2026-02-05 05:29:23.390972 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.390977 | orchestrator | 2026-02-05 05:29:23.390982 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-05 05:29:23.390987 | orchestrator | Thursday 05 February 2026 05:27:09 +0000 (0:00:01.919) 0:46:53.865 ***** 2026-02-05 05:29:23.390992 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.390997 | orchestrator | 2026-02-05 05:29:23.391002 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-05 05:29:23.391007 | orchestrator | Thursday 05 February 2026 05:27:10 +0000 (0:00:01.925) 0:46:55.791 ***** 2026-02-05 05:29:23.391012 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.391016 | orchestrator | 2026-02-05 05:29:23.391021 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-05 05:29:23.391026 | orchestrator | Thursday 05 February 2026 05:27:13 +0000 (0:00:02.375) 0:46:58.166 ***** 2026-02-05 
05:29:23.391031 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:29:23.391037 | orchestrator | 2026-02-05 05:29:23.391042 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-05 05:29:23.391047 | orchestrator | Thursday 05 February 2026 05:27:14 +0000 (0:00:01.145) 0:46:59.312 ***** 2026-02-05 05:29:23.391052 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:29:23.391056 | orchestrator | 2026-02-05 05:29:23.391061 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-05 05:29:23.391066 | orchestrator | Thursday 05 February 2026 05:27:15 +0000 (0:00:01.183) 0:47:00.495 ***** 2026-02-05 05:29:23.391071 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 05:29:23.391077 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-05 05:29:23.391081 | orchestrator | 2026-02-05 05:29:23.391086 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-05 05:29:23.391091 | orchestrator | Thursday 05 February 2026 05:27:17 +0000 (0:00:01.804) 0:47:02.300 ***** 2026-02-05 05:29:23.391096 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 05:29:23.391101 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-05 05:29:23.391106 | orchestrator | 2026-02-05 05:29:23.391111 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-05 05:29:23.391116 | orchestrator | Thursday 05 February 2026 05:27:20 +0000 (0:00:02.891) 0:47:05.192 ***** 2026-02-05 05:29:23.391121 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-02-05 05:29:23.391126 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-05 05:29:23.391166 | orchestrator | 2026-02-05 05:29:23.391173 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-05 05:29:23.391179 | orchestrator | Thursday 05 February 2026 05:27:24 +0000 (0:00:04.309) 
0:47:09.502 ***** 2026-02-05 05:29:23.391184 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:29:23.391189 | orchestrator | 2026-02-05 05:29:23.391194 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-05 05:29:23.391199 | orchestrator | Thursday 05 February 2026 05:27:25 +0000 (0:00:00.889) 0:47:10.391 ***** 2026-02-05 05:29:23.391204 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-05 05:29:23.391221 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:29:23.391227 | orchestrator | 2026-02-05 05:29:23.391232 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-05 05:29:23.391237 | orchestrator | Thursday 05 February 2026 05:27:38 +0000 (0:00:13.120) 0:47:23.512 ***** 2026-02-05 05:29:23.391246 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:29:23.391252 | orchestrator | 2026-02-05 05:29:23.391257 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-05 05:29:23.391262 | orchestrator | Thursday 05 February 2026 05:27:39 +0000 (0:00:00.853) 0:47:24.365 ***** 2026-02-05 05:29:23.391267 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:29:23.391272 | orchestrator | 2026-02-05 05:29:23.391277 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-05 05:29:23.391282 | orchestrator | Thursday 05 February 2026 05:27:40 +0000 (0:00:00.774) 0:47:25.140 ***** 2026-02-05 05:29:23.391288 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:29:23.391293 | orchestrator | 2026-02-05 05:29:23.391298 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-05 05:29:23.391303 | orchestrator | Thursday 05 February 2026 05:27:41 +0000 (0:00:00.783) 0:47:25.923 ***** 2026-02-05 05:29:23.391308 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:29:23.391313 | orchestrator | 2026-02-05 05:29:23.391320 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-05 05:29:23.391328 | orchestrator | 2026-02-05 05:29:23.391351 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:29:23.391360 | orchestrator | Thursday 05 February 2026 05:27:43 +0000 (0:00:02.636) 0:47:28.560 ***** 2026-02-05 05:29:23.391368 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:29:23.391376 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:29:23.391385 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.391392 | orchestrator | 2026-02-05 05:29:23.391400 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:29:23.391407 | orchestrator | Thursday 05 February 2026 05:27:45 +0000 (0:00:01.755) 0:47:30.316 ***** 2026-02-05 05:29:23.391413 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:29:23.391420 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:29:23.391428 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:29:23.391436 | orchestrator | 2026-02-05 05:29:23.391444 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-05 05:29:23.391453 | orchestrator | Thursday 05 February 2026 05:27:47 +0000 (0:00:01.514) 0:47:31.830 ***** 2026-02-05 05:29:23.391462 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-05 05:29:23.391470 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-05 
05:29:23.391479 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-05 05:29:23.391490 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-05 05:29:23.391498 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-05 05:29:23.391504 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-05 05:29:23.391510 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-05 05:29:23.391515 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-05 05:29:23.391521 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-05 05:29:23.391526 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-05 05:29:23.391532 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-05 05:29:23.391538 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-05 05:29:23.391549 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-05 05:29:23.391555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-05 05:29:23.391560 | orchestrator | 2026-02-05 05:29:23.391566 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-05 05:29:23.391571 | orchestrator | Thursday 05 February 2026 05:29:05 +0000 (0:01:18.904) 0:48:50.735 ***** 2026-02-05 05:29:23.391577 | orchestrator 
| changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-05 05:29:23.391583 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-05 05:29:23.391588 | orchestrator | 2026-02-05 05:29:23.391594 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-05 05:29:23.391600 | orchestrator | Thursday 05 February 2026 05:29:12 +0000 (0:00:06.670) 0:48:57.405 ***** 2026-02-05 05:29:23.391605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:29:23.391611 | orchestrator | 2026-02-05 05:29:23.391617 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-05 05:29:23.391622 | orchestrator | 2026-02-05 05:29:23.391628 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 05:29:23.391633 | orchestrator | Thursday 05 February 2026 05:29:15 +0000 (0:00:03.271) 0:49:00.677 ***** 2026-02-05 05:29:23.391639 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-05 05:29:23.391645 | orchestrator | 2026-02-05 05:29:23.391654 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 05:29:23.391660 | orchestrator | Thursday 05 February 2026 05:29:16 +0000 (0:00:01.111) 0:49:01.788 ***** 2026-02-05 05:29:23.391666 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:29:23.391671 | orchestrator | 2026-02-05 05:29:23.391677 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:29:23.391683 | orchestrator | Thursday 05 February 2026 05:29:18 +0000 (0:00:01.502) 0:49:03.291 ***** 2026-02-05 05:29:23.391689 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:29:23.391694 | orchestrator | 2026-02-05 05:29:23.391700 | orchestrator | TASK [ceph-facts : Check if podman binary is 
present] ************************** 2026-02-05 05:29:23.391706 | orchestrator | Thursday 05 February 2026 05:29:19 +0000 (0:00:01.127) 0:49:04.419 ***** 2026-02-05 05:29:23.391711 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:29:23.391716 | orchestrator | 2026-02-05 05:29:23.391721 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:29:23.391726 | orchestrator | Thursday 05 February 2026 05:29:21 +0000 (0:00:01.477) 0:49:05.897 ***** 2026-02-05 05:29:23.391731 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:29:23.391735 | orchestrator | 2026-02-05 05:29:23.391740 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:29:23.391745 | orchestrator | Thursday 05 February 2026 05:29:22 +0000 (0:00:01.124) 0:49:07.022 ***** 2026-02-05 05:29:23.391750 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:29:23.391755 | orchestrator | 2026-02-05 05:29:23.391759 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:29:23.391769 | orchestrator | Thursday 05 February 2026 05:29:23 +0000 (0:00:01.177) 0:49:08.200 ***** 2026-02-05 05:29:47.762585 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:29:47.762673 | orchestrator | 2026-02-05 05:29:47.762680 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:29:47.762686 | orchestrator | Thursday 05 February 2026 05:29:24 +0000 (0:00:01.146) 0:49:09.346 ***** 2026-02-05 05:29:47.762692 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:29:47.762697 | orchestrator | 2026-02-05 05:29:47.762701 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:29:47.762706 | orchestrator | Thursday 05 February 2026 05:29:25 +0000 (0:00:01.134) 0:49:10.481 ***** 2026-02-05 05:29:47.762710 | orchestrator | ok: [testbed-node-0] 2026-02-05 
05:29:47.762714 | orchestrator |
2026-02-05 05:29:47.762719 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 05:29:47.762738 | orchestrator | Thursday 05 February 2026 05:29:26 +0000 (0:00:01.111) 0:49:11.592 *****
2026-02-05 05:29:47.762745 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:29:47.762751 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:29:47.762758 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:29:47.762763 | orchestrator |
2026-02-05 05:29:47.762769 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 05:29:47.762776 | orchestrator | Thursday 05 February 2026 05:29:28 +0000 (0:00:01.676) 0:49:13.268 *****
2026-02-05 05:29:47.762782 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:29:47.762788 | orchestrator |
2026-02-05 05:29:47.762794 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 05:29:47.762802 | orchestrator | Thursday 05 February 2026 05:29:29 +0000 (0:00:01.264) 0:49:14.533 *****
2026-02-05 05:29:47.762806 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:29:47.762810 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:29:47.762814 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:29:47.762818 | orchestrator |
2026-02-05 05:29:47.762822 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 05:29:47.762826 | orchestrator | Thursday 05 February 2026 05:29:32 +0000 (0:00:02.979) 0:49:17.513 *****
2026-02-05 05:29:47.762830 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:29:47.762834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 05:29:47.762838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 05:29:47.762842 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:47.762846 | orchestrator |
2026-02-05 05:29:47.762850 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 05:29:47.762854 | orchestrator | Thursday 05 February 2026 05:29:34 +0000 (0:00:01.435) 0:49:18.948 *****
2026-02-05 05:29:47.762858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762865 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762873 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:47.762877 | orchestrator |
2026-02-05 05:29:47.762881 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 05:29:47.762885 | orchestrator | Thursday 05 February 2026 05:29:35 +0000 (0:00:01.695) 0:49:20.644 *****
2026-02-05 05:29:47.762901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762908 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762927 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762931 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:47.762935 | orchestrator |
2026-02-05 05:29:47.762939 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 05:29:47.762943 | orchestrator | Thursday 05 February 2026 05:29:37 +0000 (0:00:01.247) 0:49:21.892 *****
2026-02-05 05:29:47.762960 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:29:30.291931', 'end': '2026-02-05 05:29:30.347503', 'delta': '0:00:00.055572', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762966 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:29:30.890561', 'end': '2026-02-05 05:29:30.935545', 'delta': '0:00:00.044984', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762971 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:29:31.525631', 'end': '2026-02-05 05:29:31.570634', 'delta': '0:00:00.045003', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:29:47.762975 | orchestrator |
2026-02-05 05:29:47.762979 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 05:29:47.762983 | orchestrator | Thursday 05 February 2026 05:29:38 +0000 (0:00:01.221) 0:49:23.113 *****
2026-02-05 05:29:47.762987 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:29:47.762991 | orchestrator |
2026-02-05 05:29:47.762995 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 05:29:47.762999 | orchestrator | Thursday 05 February 2026 05:29:39 +0000 (0:00:01.266) 0:49:24.359 *****
2026-02-05 05:29:47.763003 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:47.763007 | orchestrator |
2026-02-05 05:29:47.763014 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 05:29:47.763022 | orchestrator | Thursday 05 February 2026 05:29:40 +0000 (0:00:01.119) 0:49:25.625 *****
2026-02-05 05:29:47.763026 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:29:47.763030 | orchestrator |
2026-02-05 05:29:47.763034 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 05:29:47.763038 | orchestrator | Thursday 05 February 2026 05:29:41 +0000 (0:00:01.119) 0:49:26.744 *****
2026-02-05 05:29:47.763042 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:29:47.763046 | orchestrator |
2026-02-05 05:29:47.763050 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:29:47.763054 | orchestrator | Thursday 05 February 2026 05:29:44 +0000 (0:00:02.299) 0:49:29.044 *****
2026-02-05 05:29:47.763096 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:29:47.763101 | orchestrator |
2026-02-05 05:29:47.763105 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 05:29:47.763109 | orchestrator | Thursday 05 February 2026 05:29:45 +0000 (0:00:01.165) 0:49:30.209 *****
2026-02-05 05:29:47.763113 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:47.763117 | orchestrator |
2026-02-05 05:29:47.763121 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 05:29:47.763125 | orchestrator | Thursday 05 February 2026 05:29:46 +0000 (0:00:01.134) 0:49:31.343 *****
2026-02-05 05:29:47.763128 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:47.763132 | orchestrator |
2026-02-05 05:29:47.763136 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:29:47.763144 | orchestrator | Thursday 05 February 2026 05:29:47 +0000 (0:00:01.228) 0:49:32.572 *****
2026-02-05 05:29:58.271898 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.271987 | orchestrator |
2026-02-05 05:29:58.271996 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 05:29:58.272002 | orchestrator | Thursday 05 February 2026 05:29:48 +0000 (0:00:01.147) 0:49:33.720 *****
2026-02-05 05:29:58.272007 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272011 | orchestrator |
2026-02-05 05:29:58.272015 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 05:29:58.272019 | orchestrator | Thursday 05 February 2026 05:29:50 +0000 (0:00:01.127) 0:49:34.847 *****
2026-02-05 05:29:58.272023 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272071 | orchestrator |
2026-02-05 05:29:58.272078 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 05:29:58.272086 | orchestrator | Thursday 05 February 2026 05:29:51 +0000 (0:00:01.142) 0:49:35.990 *****
2026-02-05 05:29:58.272090 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272096 | orchestrator |
2026-02-05 05:29:58.272102 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 05:29:58.272108 | orchestrator | Thursday 05 February 2026 05:29:52 +0000 (0:00:01.122) 0:49:37.113 *****
2026-02-05 05:29:58.272112 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272116 | orchestrator |
2026-02-05 05:29:58.272120 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 05:29:58.272124 | orchestrator | Thursday 05 February 2026 05:29:53 +0000 (0:00:01.132) 0:49:38.246 *****
2026-02-05 05:29:58.272128 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272132 | orchestrator |
2026-02-05 05:29:58.272136 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 05:29:58.272140 | orchestrator | Thursday 05 February 2026 05:29:54 +0000 (0:00:01.126) 0:49:39.373 *****
2026-02-05 05:29:58.272144 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272148 | orchestrator |
2026-02-05 05:29:58.272153 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 05:29:58.272159 | orchestrator | Thursday 05 February 2026 05:29:55 +0000 (0:00:01.113) 0:49:40.487 *****
2026-02-05 05:29:58.272167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 05:29:58.272232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 05:29:58.272282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:29:58.272298 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:29:58.272304 | orchestrator |
2026-02-05 05:29:58.272310 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-05 05:29:58.272315 | orchestrator | Thursday 05 February 2026 05:29:56 +0000 (0:00:01.310) 0:49:41.797 *****
2026-02-05 05:29:58.272326 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494378 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494472 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494507 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494538 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494545 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494572 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7aa79787', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1', 'scsi-SQEMU_QEMU_HARDDISK_7aa79787-b159-4a57-a4f1-e1205678d581-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494597 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494609 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:30:02.494618 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:30:02.494627 | orchestrator |
2026-02-05 05:30:02.494636 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-05 05:30:02.494646 | orchestrator | Thursday 05 February 2026 05:29:58 +0000 (0:00:01.286) 0:49:43.084 *****
2026-02-05 05:30:02.494653 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:02.494662 | orchestrator |
2026-02-05 05:30:02.494669 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 05:30:02.494676 | orchestrator | Thursday 05 February 2026 05:29:59 +0000 (0:00:01.537) 0:49:44.622 *****
2026-02-05 05:30:02.494684 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:02.494691 | orchestrator |
2026-02-05 05:30:02.494698 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:30:02.494705 | orchestrator | Thursday 05 February 2026 05:30:00 +0000 (0:00:01.163) 0:49:45.785 *****
2026-02-05 05:30:02.494711 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:02.494718 | orchestrator |
2026-02-05 05:30:02.494725 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:30:02.494739 | orchestrator | Thursday 05 February 2026 05:30:02 +0000 (0:00:01.519) 0:49:47.305 *****
2026-02-05 05:30:55.235855 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:30:55.235969 | orchestrator |
2026-02-05 05:30:55.235981 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:30:55.235990 | orchestrator | Thursday 05 February 2026 05:30:03 +0000 (0:00:01.144) 0:49:48.450 *****
2026-02-05 05:30:55.236016 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:30:55.236020 | orchestrator |
2026-02-05 05:30:55.236024 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:30:55.236028 | orchestrator | Thursday 05 February 2026 05:30:04 +0000 (0:00:01.200) 0:49:49.650 *****
2026-02-05 05:30:55.236032 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:30:55.236035 | orchestrator |
2026-02-05 05:30:55.236039 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 05:30:55.236043 | orchestrator | Thursday 05 February 2026 05:30:05 +0000 (0:00:01.145) 0:49:50.796 *****
2026-02-05 05:30:55.236048 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:30:55.236053 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 05:30:55.236056 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 05:30:55.236060 | orchestrator |
2026-02-05 05:30:55.236064 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 05:30:55.236067 | orchestrator | Thursday 05 February 2026 05:30:07 +0000 (0:00:01.719) 0:49:52.515 *****
2026-02-05 05:30:55.236071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:30:55.236076 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 05:30:55.236079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 05:30:55.236083 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:30:55.236087 | orchestrator |
2026-02-05 05:30:55.236093 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 05:30:55.236099 | orchestrator | Thursday 05 February 2026 05:30:08 +0000 (0:00:01.190) 0:49:53.705 *****
2026-02-05 05:30:55.236113 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:30:55.236119 | orchestrator |
2026-02-05 05:30:55.236126 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-05 05:30:55.236132 | orchestrator | Thursday 05 February 2026 05:30:10 +0000 (0:00:01.129) 0:49:54.835 *****
2026-02-05 05:30:55.236138 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:30:55.236145 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:30:55.236152 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:30:55.236158 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:30:55.236165 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:30:55.236170 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 05:30:55.236177 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:30:55.236181 | orchestrator |
2026-02-05 05:30:55.236185 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-05 05:30:55.236189 | orchestrator | Thursday 05 February 2026 05:30:12 +0000 (0:00:02.105) 0:49:56.941 *****
2026-02-05 05:30:55.236192 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 05:30:55.236196 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:30:55.236200 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:30:55.236203 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:30:55.236207 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:30:55.236211 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 05:30:55.236224 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:30:55.236228 | orchestrator |
2026-02-05 05:30:55.236232 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-05 05:30:55.236240 | orchestrator | Thursday 05 February 2026 05:30:14 +0000 (0:00:02.623) 0:49:59.564 *****
2026-02-05 05:30:55.236244 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:55.236248 | orchestrator |
2026-02-05 05:30:55.236253 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-05 05:30:55.236259 | orchestrator | Thursday 05 February 2026 05:30:18 +0000 (0:00:03.387) 0:50:02.952 *****
2026-02-05 05:30:55.236265 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:55.236272 | orchestrator |
2026-02-05 05:30:55.236277 | orchestrator | TASK [Get name of remaining active mds] ****************************************
2026-02-05 05:30:55.236283 | orchestrator | Thursday 05 February 2026 05:30:21 +0000 (0:00:03.328) 0:50:06.280 *****
2026-02-05 05:30:55.236290 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:55.236296 | orchestrator |
2026-02-05 05:30:55.236302 | orchestrator | TASK [Set_fact mds_active_name] ************************************************
2026-02-05 05:30:55.236308 | orchestrator | Thursday 05 February 2026 05:30:23 +0000 (0:00:02.413) 0:50:08.693 *****
2026-02-05 05:30:55.236333 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4734', 'value': {'gid': 4734, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/1578773080', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 1578773080}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 1578773080}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}})
2026-02-05 05:30:55.236340 | orchestrator |
2026-02-05 05:30:55.236344 | orchestrator | TASK [Set_fact mds_active_host] ************************************************
2026-02-05 05:30:55.236348 | orchestrator | Thursday 05 February 2026 05:30:25 +0000 (0:00:01.193) 0:50:09.887 *****
2026-02-05 05:30:55.236352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 05:30:55.236356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 05:30:55.236359 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 05:30:55.236363 | orchestrator |
2026-02-05 05:30:55.236367 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-05 05:30:55.236371 | orchestrator | Thursday 05 February 2026 05:30:26 +0000 (0:00:01.509) 0:50:11.396 *****
2026-02-05 05:30:55.236374 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 05:30:55.236378 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 05:30:55.236382 | orchestrator |
2026-02-05 05:30:55.236385 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-05 05:30:55.236389 | orchestrator | Thursday 05 February 2026 05:30:28 +0000 (0:00:01.444) 0:50:12.840 *****
2026-02-05 05:30:55.236393 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:30:55.236397 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:30:55.236401 | orchestrator |
2026-02-05 05:30:55.236406 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-05 05:30:55.236412 | orchestrator | Thursday 05 February 2026 05:30:36 +0000 (0:00:08.804) 0:50:21.644 *****
2026-02-05 05:30:55.236418 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:30:55.236425 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:30:55.236431 | orchestrator |
2026-02-05 05:30:55.236437 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-05 05:30:55.236445 | orchestrator | Thursday 05 February 2026 05:30:40 +0000 (0:00:03.916) 0:50:25.561 *****
2026-02-05 05:30:55.236453 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:30:55.236458 | orchestrator |
2026-02-05 05:30:55.236462 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-05 05:30:55.236466 | orchestrator | Thursday 05 February 2026 05:30:42 +0000 (0:00:02.245) 0:50:27.807 *****
2026-02-05 05:30:55.236471 | orchestrator | changed: [testbed-node-0]
2026-02-05 05:30:55.236475 | orchestrator |
2026-02-05 05:30:55.236479 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-05 05:30:55.236484 | orchestrator |
2026-02-05 05:30:55.236488 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:30:55.236492 | orchestrator | Thursday 05 February 2026 05:30:44 +0000 (0:00:01.508) 0:50:29.316 *****
2026-02-05 05:30:55.236497 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-05 05:30:55.236501 | orchestrator |
2026-02-05 05:30:55.236505 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 05:30:55.236510 | orchestrator | Thursday 05 February 2026 05:30:45 +0000 (0:00:01.115) 0:50:30.431 *****
2026-02-05 05:30:55.236514 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:30:55.236518 | orchestrator |
2026-02-05 05:30:55.236523 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 05:30:55.236530 | orchestrator | Thursday 05 February 2026 05:30:47 +0000 (0:00:01.411) 0:50:31.842 *****
2026-02-05 05:30:55.236535 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:30:55.236539 | orchestrator |
2026-02-05 05:30:55.236543 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:30:55.236547 | orchestrator | Thursday 05 February 2026 05:30:48 +0000 (0:00:01.124) 0:50:32.967 *****
2026-02-05 05:30:55.236552 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:30:55.236556 | orchestrator |
2026-02-05 05:30:55.236560 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:30:55.236565 | orchestrator | Thursday 05 February 2026 05:30:49 +0000 (0:00:01.462) 0:50:34.430 *****
2026-02-05 05:30:55.236569 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:30:55.236574 | orchestrator |
2026-02-05 05:30:55.236579 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 05:30:55.236586 | orchestrator | Thursday 05 February 2026 05:30:50 +0000 (0:00:01.154) 0:50:35.584 *****
2026-02-05 05:30:55.236592 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:30:55.236598 | orchestrator |
2026-02-05 05:30:55.236605 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 05:30:55.236612 | orchestrator | Thursday 05
February 2026 05:30:51 +0000 (0:00:01.108) 0:50:36.693 ***** 2026-02-05 05:30:55.236618 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:30:55.236625 | orchestrator | 2026-02-05 05:30:55.236631 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:30:55.236636 | orchestrator | Thursday 05 February 2026 05:30:52 +0000 (0:00:01.122) 0:50:37.815 ***** 2026-02-05 05:30:55.236640 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:30:55.236644 | orchestrator | 2026-02-05 05:30:55.236649 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:30:55.236653 | orchestrator | Thursday 05 February 2026 05:30:54 +0000 (0:00:01.128) 0:50:38.944 ***** 2026-02-05 05:30:55.236658 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:30:55.236662 | orchestrator | 2026-02-05 05:30:55.236670 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:31:19.673285 | orchestrator | Thursday 05 February 2026 05:30:55 +0000 (0:00:01.103) 0:50:40.047 ***** 2026-02-05 05:31:19.673366 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:31:19.673373 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:31:19.673378 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:31:19.673383 | orchestrator | 2026-02-05 05:31:19.673403 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:31:19.673407 | orchestrator | Thursday 05 February 2026 05:30:56 +0000 (0:00:01.627) 0:50:41.675 ***** 2026-02-05 05:31:19.673411 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:19.673416 | orchestrator | 2026-02-05 05:31:19.673421 | orchestrator | TASK [ceph-facts : Find a running mon container] 
******************************* 2026-02-05 05:31:19.673425 | orchestrator | Thursday 05 February 2026 05:30:58 +0000 (0:00:01.210) 0:50:42.886 ***** 2026-02-05 05:31:19.673429 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:31:19.673433 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:31:19.673437 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:31:19.673441 | orchestrator | 2026-02-05 05:31:19.673445 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:31:19.673448 | orchestrator | Thursday 05 February 2026 05:31:01 +0000 (0:00:03.229) 0:50:46.116 ***** 2026-02-05 05:31:19.673453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 05:31:19.673458 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 05:31:19.673462 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 05:31:19.673466 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673469 | orchestrator | 2026-02-05 05:31:19.673473 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:31:19.673477 | orchestrator | Thursday 05 February 2026 05:31:02 +0000 (0:00:01.395) 0:50:47.512 ***** 2026-02-05 05:31:19.673482 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:31:19.673488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-02-05 05:31:19.673492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:31:19.673496 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673500 | orchestrator | 2026-02-05 05:31:19.673504 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:31:19.673508 | orchestrator | Thursday 05 February 2026 05:31:04 +0000 (0:00:01.877) 0:50:49.389 ***** 2026-02-05 05:31:19.673533 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:19.673541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:19.673545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-02-05 05:31:19.673553 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673557 | orchestrator | 2026-02-05 05:31:19.673561 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:31:19.673565 | orchestrator | Thursday 05 February 2026 05:31:05 +0000 (0:00:01.191) 0:50:50.581 ***** 2026-02-05 05:31:19.673581 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:30:58.632461', 'end': '2026-02-05 05:30:58.681125', 'delta': '0:00:00.048664', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:31:19.673590 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:30:59.259501', 'end': '2026-02-05 05:30:59.294799', 'delta': '0:00:00.035298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:31:19.673598 | orchestrator | ok: 
[testbed-node-5] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:31:00.099538', 'end': '2026-02-05 05:31:00.141004', 'delta': '0:00:00.041466', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:31:19.673605 | orchestrator | 2026-02-05 05:31:19.673611 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:31:19.673618 | orchestrator | Thursday 05 February 2026 05:31:06 +0000 (0:00:01.192) 0:50:51.774 ***** 2026-02-05 05:31:19.673624 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:19.673631 | orchestrator | 2026-02-05 05:31:19.673638 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:31:19.673645 | orchestrator | Thursday 05 February 2026 05:31:08 +0000 (0:00:01.560) 0:50:53.335 ***** 2026-02-05 05:31:19.673651 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673658 | orchestrator | 2026-02-05 05:31:19.673663 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:31:19.673667 | orchestrator | Thursday 05 February 2026 05:31:09 +0000 (0:00:01.202) 0:50:54.537 ***** 2026-02-05 05:31:19.673674 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:19.673678 | orchestrator | 2026-02-05 05:31:19.673685 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:31:19.673689 | 
orchestrator | Thursday 05 February 2026 05:31:10 +0000 (0:00:01.114) 0:50:55.651 ***** 2026-02-05 05:31:19.673693 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:31:19.673697 | orchestrator | 2026-02-05 05:31:19.673705 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:31:19.673709 | orchestrator | Thursday 05 February 2026 05:31:12 +0000 (0:00:02.013) 0:50:57.665 ***** 2026-02-05 05:31:19.673713 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:19.673717 | orchestrator | 2026-02-05 05:31:19.673721 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:31:19.673725 | orchestrator | Thursday 05 February 2026 05:31:13 +0000 (0:00:01.141) 0:50:58.807 ***** 2026-02-05 05:31:19.673729 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673733 | orchestrator | 2026-02-05 05:31:19.673737 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:31:19.673741 | orchestrator | Thursday 05 February 2026 05:31:15 +0000 (0:00:01.102) 0:50:59.909 ***** 2026-02-05 05:31:19.673744 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673748 | orchestrator | 2026-02-05 05:31:19.673752 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:31:19.673756 | orchestrator | Thursday 05 February 2026 05:31:16 +0000 (0:00:01.212) 0:51:01.121 ***** 2026-02-05 05:31:19.673760 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673764 | orchestrator | 2026-02-05 05:31:19.673768 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:31:19.673772 | orchestrator | Thursday 05 February 2026 05:31:17 +0000 (0:00:01.104) 0:51:02.226 ***** 2026-02-05 05:31:19.673776 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:19.673780 | 
orchestrator | 2026-02-05 05:31:19.673783 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:31:19.673787 | orchestrator | Thursday 05 February 2026 05:31:18 +0000 (0:00:01.107) 0:51:03.334 ***** 2026-02-05 05:31:19.673795 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:24.905146 | orchestrator | 2026-02-05 05:31:24.905245 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:31:24.905259 | orchestrator | Thursday 05 February 2026 05:31:19 +0000 (0:00:01.149) 0:51:04.483 ***** 2026-02-05 05:31:24.905267 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:24.905275 | orchestrator | 2026-02-05 05:31:24.905282 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:31:24.905289 | orchestrator | Thursday 05 February 2026 05:31:20 +0000 (0:00:01.120) 0:51:05.604 ***** 2026-02-05 05:31:24.905296 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:24.905303 | orchestrator | 2026-02-05 05:31:24.905310 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:31:24.905316 | orchestrator | Thursday 05 February 2026 05:31:21 +0000 (0:00:01.184) 0:51:06.789 ***** 2026-02-05 05:31:24.905323 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:24.905329 | orchestrator | 2026-02-05 05:31:24.905336 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:31:24.905343 | orchestrator | Thursday 05 February 2026 05:31:23 +0000 (0:00:01.168) 0:51:07.957 ***** 2026-02-05 05:31:24.905349 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:31:24.905355 | orchestrator | 2026-02-05 05:31:24.905362 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:31:24.905368 | orchestrator | Thursday 05 February 2026 05:31:24 +0000 
(0:00:01.126) 0:51:09.083 ***** 2026-02-05 05:31:24.905376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:24.905387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}})  2026-02-05 05:31:24.905428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 05:31:24.905437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}})  2026-02-05 05:31:24.905445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:24.905465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:24.905473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:31:24.905480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:24.905487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:31:24.905498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:24.905509 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}})  2026-02-05 05:31:24.905516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}})  2026-02-05 05:31:24.905528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:26.284554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:31:26.284684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:26.284720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:31:26.284734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:31:26.284748 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:31:26.284763 | orchestrator | 2026-02-05 05:31:26.284775 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:31:26.284787 | orchestrator | Thursday 05 February 2026 05:31:26 +0000 (0:00:01.805) 0:51:10.888 ***** 2026-02-05 05:31:26.284893 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:26.284920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:26.284956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:26.284984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:26.285005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:26.285037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448504 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448559 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:31:27.448572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:32:01.183271 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:32:01.183369 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183376 | orchestrator | 2026-02-05 05:32:01.183383 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:32:01.183388 | orchestrator | Thursday 05 February 2026 05:31:27 +0000 (0:00:01.371) 0:51:12.259 ***** 2026-02-05 05:32:01.183392 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:01.183397 | orchestrator | 2026-02-05 05:32:01.183401 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:32:01.183406 | orchestrator | Thursday 05 February 2026 05:31:28 +0000 (0:00:01.497) 0:51:13.757 ***** 2026-02-05 05:32:01.183410 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:01.183414 | orchestrator | 2026-02-05 05:32:01.183419 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:32:01.183423 | orchestrator | Thursday 05 February 2026 05:31:30 +0000 (0:00:01.101) 0:51:14.858 ***** 2026-02-05 05:32:01.183427 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:01.183431 | orchestrator | 2026-02-05 05:32:01.183435 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:32:01.183439 | orchestrator | Thursday 05 February 2026 05:31:31 +0000 (0:00:01.467) 0:51:16.325 ***** 2026-02-05 05:32:01.183443 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183447 | orchestrator | 2026-02-05 05:32:01.183451 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:32:01.183455 | orchestrator | Thursday 05 February 2026 05:31:32 +0000 (0:00:01.160) 0:51:17.485 ***** 2026-02-05 05:32:01.183459 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
05:32:01.183463 | orchestrator | 2026-02-05 05:32:01.183467 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:32:01.183484 | orchestrator | Thursday 05 February 2026 05:31:33 +0000 (0:00:01.241) 0:51:18.727 ***** 2026-02-05 05:32:01.183488 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183492 | orchestrator | 2026-02-05 05:32:01.183496 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:32:01.183500 | orchestrator | Thursday 05 February 2026 05:31:35 +0000 (0:00:01.108) 0:51:19.836 ***** 2026-02-05 05:32:01.183505 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-05 05:32:01.183509 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-05 05:32:01.183513 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-05 05:32:01.183517 | orchestrator | 2026-02-05 05:32:01.183521 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:32:01.183525 | orchestrator | Thursday 05 February 2026 05:31:37 +0000 (0:00:01.998) 0:51:21.834 ***** 2026-02-05 05:32:01.183529 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 05:32:01.183534 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 05:32:01.183538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 05:32:01.183542 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183560 | orchestrator | 2026-02-05 05:32:01.183564 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:32:01.183568 | orchestrator | Thursday 05 February 2026 05:31:38 +0000 (0:00:01.127) 0:51:22.962 ***** 2026-02-05 05:32:01.183572 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-05 05:32:01.183577 | 
orchestrator | 2026-02-05 05:32:01.183582 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:32:01.183587 | orchestrator | Thursday 05 February 2026 05:31:39 +0000 (0:00:01.113) 0:51:24.076 ***** 2026-02-05 05:32:01.183591 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183595 | orchestrator | 2026-02-05 05:32:01.183599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:32:01.183603 | orchestrator | Thursday 05 February 2026 05:31:40 +0000 (0:00:01.126) 0:51:25.203 ***** 2026-02-05 05:32:01.183607 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183611 | orchestrator | 2026-02-05 05:32:01.183615 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:32:01.183619 | orchestrator | Thursday 05 February 2026 05:31:41 +0000 (0:00:01.121) 0:51:26.324 ***** 2026-02-05 05:32:01.183623 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183627 | orchestrator | 2026-02-05 05:32:01.183631 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:32:01.183635 | orchestrator | Thursday 05 February 2026 05:31:42 +0000 (0:00:01.120) 0:51:27.445 ***** 2026-02-05 05:32:01.183639 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:01.183643 | orchestrator | 2026-02-05 05:32:01.183647 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:32:01.183651 | orchestrator | Thursday 05 February 2026 05:31:43 +0000 (0:00:01.245) 0:51:28.690 ***** 2026-02-05 05:32:01.183655 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:32:01.183669 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:32:01.183673 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-05 05:32:01.183677 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183681 | orchestrator | 2026-02-05 05:32:01.183685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:32:01.183689 | orchestrator | Thursday 05 February 2026 05:31:45 +0000 (0:00:01.399) 0:51:30.089 ***** 2026-02-05 05:32:01.183693 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:32:01.183697 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:32:01.183701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:32:01.183705 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183709 | orchestrator | 2026-02-05 05:32:01.183713 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:32:01.183750 | orchestrator | Thursday 05 February 2026 05:31:46 +0000 (0:00:01.360) 0:51:31.449 ***** 2026-02-05 05:32:01.183754 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:32:01.183758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:32:01.183762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:32:01.183766 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183770 | orchestrator | 2026-02-05 05:32:01.183774 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:32:01.183778 | orchestrator | Thursday 05 February 2026 05:31:47 +0000 (0:00:01.359) 0:51:32.809 ***** 2026-02-05 05:32:01.183782 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:01.183786 | orchestrator | 2026-02-05 05:32:01.183790 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:32:01.183794 | orchestrator | Thursday 05 February 2026 05:31:49 +0000 
(0:00:01.157) 0:51:33.967 ***** 2026-02-05 05:32:01.183798 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 05:32:01.183806 | orchestrator | 2026-02-05 05:32:01.183810 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:32:01.183814 | orchestrator | Thursday 05 February 2026 05:31:50 +0000 (0:00:01.336) 0:51:35.303 ***** 2026-02-05 05:32:01.183818 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:32:01.183822 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:32:01.183828 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:32:01.183834 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-05 05:32:01.183844 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:32:01.183852 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-05 05:32:01.183858 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:32:01.183864 | orchestrator | 2026-02-05 05:32:01.183870 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:32:01.183880 | orchestrator | Thursday 05 February 2026 05:31:52 +0000 (0:00:02.078) 0:51:37.382 ***** 2026-02-05 05:32:01.183889 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:32:01.183896 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:32:01.183902 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:32:01.183908 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-05 05:32:01.183915 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:32:01.183921 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-05 05:32:01.183928 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:32:01.183933 | orchestrator | 2026-02-05 05:32:01.183939 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-02-05 05:32:01.183946 | orchestrator | Thursday 05 February 2026 05:31:55 +0000 (0:00:02.536) 0:51:39.919 ***** 2026-02-05 05:32:01.183952 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.183959 | orchestrator | 2026-02-05 05:32:01.183965 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 05:32:01.183971 | orchestrator | Thursday 05 February 2026 05:31:56 +0000 (0:00:01.146) 0:51:41.066 ***** 2026-02-05 05:32:01.183978 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-05 05:32:01.183984 | orchestrator | 2026-02-05 05:32:01.183991 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:32:01.183997 | orchestrator | Thursday 05 February 2026 05:31:57 +0000 (0:00:01.172) 0:51:42.238 ***** 2026-02-05 05:32:01.184004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-05 05:32:01.184010 | orchestrator | 2026-02-05 05:32:01.184017 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:32:01.184024 | orchestrator | Thursday 05 February 2026 05:31:58 +0000 (0:00:01.128) 0:51:43.366 ***** 2026-02-05 05:32:01.184030 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:01.184037 | orchestrator | 2026-02-05 05:32:01.184043 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:32:01.184050 | orchestrator | Thursday 05 February 2026 05:31:59 +0000 (0:00:01.125) 0:51:44.492 ***** 2026-02-05 05:32:01.184056 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:01.184062 | orchestrator | 2026-02-05 05:32:01.184069 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:32:01.184082 | orchestrator | Thursday 05 February 2026 05:32:01 +0000 (0:00:01.499) 0:51:45.992 ***** 2026-02-05 05:32:50.093686 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.093783 | orchestrator | 2026-02-05 05:32:50.093797 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:32:50.093806 | orchestrator | Thursday 05 February 2026 05:32:02 +0000 (0:00:01.534) 0:51:47.527 ***** 2026-02-05 05:32:50.093814 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.093822 | orchestrator | 2026-02-05 05:32:50.093829 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:32:50.093837 | orchestrator | Thursday 05 February 2026 05:32:04 +0000 (0:00:01.510) 0:51:49.037 ***** 2026-02-05 05:32:50.093845 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.093853 | orchestrator | 2026-02-05 05:32:50.093861 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:32:50.093869 | orchestrator | Thursday 05 February 2026 05:32:05 +0000 (0:00:01.095) 0:51:50.132 ***** 2026-02-05 05:32:50.093876 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.093884 | orchestrator | 2026-02-05 05:32:50.093891 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:32:50.093898 | orchestrator | Thursday 05 February 2026 05:32:06 +0000 (0:00:01.185) 0:51:51.318 ***** 2026-02-05 05:32:50.093906 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.093914 | orchestrator | 2026-02-05 05:32:50.093921 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:32:50.093928 | orchestrator | Thursday 05 February 2026 05:32:07 +0000 (0:00:01.114) 0:51:52.433 ***** 2026-02-05 05:32:50.093936 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.093943 | orchestrator | 2026-02-05 05:32:50.093950 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:32:50.093957 | orchestrator | Thursday 05 February 2026 05:32:09 +0000 (0:00:01.507) 0:51:53.941 ***** 2026-02-05 05:32:50.093965 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.093972 | orchestrator | 2026-02-05 05:32:50.093980 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:32:50.093987 | orchestrator | Thursday 05 February 2026 05:32:10 +0000 (0:00:01.558) 0:51:55.499 ***** 2026-02-05 05:32:50.093994 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.094002 | orchestrator | 2026-02-05 05:32:50.094009 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:32:50.094066 | orchestrator | Thursday 05 February 2026 05:32:11 +0000 (0:00:01.110) 0:51:56.610 ***** 2026-02-05 05:32:50.094075 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.094082 | orchestrator | 2026-02-05 05:32:50.094090 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:32:50.094097 | orchestrator | Thursday 05 February 2026 05:32:12 +0000 (0:00:01.091) 0:51:57.702 ***** 2026-02-05 05:32:50.094105 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.094112 | orchestrator | 2026-02-05 05:32:50.094133 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 
05:32:50.094141 | orchestrator | Thursday 05 February 2026 05:32:14 +0000 (0:00:01.124) 0:51:58.827 ***** 2026-02-05 05:32:50.094148 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.094155 | orchestrator | 2026-02-05 05:32:50.094163 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:32:50.094170 | orchestrator | Thursday 05 February 2026 05:32:15 +0000 (0:00:01.118) 0:51:59.945 ***** 2026-02-05 05:32:50.094177 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.094184 | orchestrator | 2026-02-05 05:32:50.094192 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:32:50.094199 | orchestrator | Thursday 05 February 2026 05:32:16 +0000 (0:00:01.115) 0:52:01.061 ***** 2026-02-05 05:32:50.094207 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.094214 | orchestrator | 2026-02-05 05:32:50.094221 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:32:50.094229 | orchestrator | Thursday 05 February 2026 05:32:17 +0000 (0:00:01.098) 0:52:02.159 ***** 2026-02-05 05:32:50.094236 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.094260 | orchestrator | 2026-02-05 05:32:50.094268 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 05:32:50.094276 | orchestrator | Thursday 05 February 2026 05:32:18 +0000 (0:00:01.121) 0:52:03.281 ***** 2026-02-05 05:32:50.094283 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:32:50.094290 | orchestrator | 2026-02-05 05:32:50.094297 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:32:50.094305 | orchestrator | Thursday 05 February 2026 05:32:19 +0000 (0:00:01.110) 0:52:04.392 ***** 2026-02-05 05:32:50.094312 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:32:50.094319 | orchestrator | 2026-02-05 
05:32:50.094326 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 05:32:50.094333 | orchestrator | Thursday 05 February 2026 05:32:20 +0000 (0:00:01.181) 0:52:05.574 *****
2026-02-05 05:32:50.094341 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:32:50.094348 | orchestrator |
2026-02-05 05:32:50.094355 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-05 05:32:50.094362 | orchestrator | Thursday 05 February 2026 05:32:21 +0000 (0:00:01.172) 0:52:06.747 *****
2026-02-05 05:32:50.094369 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094376 | orchestrator |
2026-02-05 05:32:50.094383 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-05 05:32:50.094391 | orchestrator | Thursday 05 February 2026 05:32:23 +0000 (0:00:01.108) 0:52:07.855 *****
2026-02-05 05:32:50.094398 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094405 | orchestrator |
2026-02-05 05:32:50.094412 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-05 05:32:50.094419 | orchestrator | Thursday 05 February 2026 05:32:24 +0000 (0:00:01.141) 0:52:08.996 *****
2026-02-05 05:32:50.094426 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094434 | orchestrator |
2026-02-05 05:32:50.094441 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-05 05:32:50.094448 | orchestrator | Thursday 05 February 2026 05:32:25 +0000 (0:00:01.115) 0:52:10.112 *****
2026-02-05 05:32:50.094455 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094462 | orchestrator |
2026-02-05 05:32:50.094470 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-05 05:32:50.094491 | orchestrator | Thursday 05 February 2026 05:32:26 +0000 (0:00:01.137) 0:52:11.249 *****
2026-02-05 05:32:50.094499 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094506 | orchestrator |
2026-02-05 05:32:50.094513 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-05 05:32:50.094520 | orchestrator | Thursday 05 February 2026 05:32:27 +0000 (0:00:01.116) 0:52:12.366 *****
2026-02-05 05:32:50.094528 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094535 | orchestrator |
2026-02-05 05:32:50.094542 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-05 05:32:50.094549 | orchestrator | Thursday 05 February 2026 05:32:28 +0000 (0:00:01.143) 0:52:13.510 *****
2026-02-05 05:32:50.094556 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094563 | orchestrator |
2026-02-05 05:32:50.094571 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-05 05:32:50.094579 | orchestrator | Thursday 05 February 2026 05:32:29 +0000 (0:00:01.088) 0:52:14.599 *****
2026-02-05 05:32:50.094586 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094593 | orchestrator |
2026-02-05 05:32:50.094600 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-05 05:32:50.094608 | orchestrator | Thursday 05 February 2026 05:32:30 +0000 (0:00:01.102) 0:52:15.701 *****
2026-02-05 05:32:50.094631 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094638 | orchestrator |
2026-02-05 05:32:50.094646 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-05 05:32:50.094653 | orchestrator | Thursday 05 February 2026 05:32:31 +0000 (0:00:01.053) 0:52:16.754 *****
2026-02-05 05:32:50.094666 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094673 | orchestrator |
2026-02-05 05:32:50.094680 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-05 05:32:50.094687 | orchestrator | Thursday 05 February 2026 05:32:32 +0000 (0:00:01.058) 0:52:17.813 *****
2026-02-05 05:32:50.094695 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094702 | orchestrator |
2026-02-05 05:32:50.094709 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-05 05:32:50.094716 | orchestrator | Thursday 05 February 2026 05:32:33 +0000 (0:00:00.961) 0:52:18.775 *****
2026-02-05 05:32:50.094723 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094730 | orchestrator |
2026-02-05 05:32:50.094738 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 05:32:50.094745 | orchestrator | Thursday 05 February 2026 05:32:34 +0000 (0:00:00.900) 0:52:19.675 *****
2026-02-05 05:32:50.094752 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:32:50.094759 | orchestrator |
2026-02-05 05:32:50.094766 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 05:32:50.094777 | orchestrator | Thursday 05 February 2026 05:32:36 +0000 (0:00:01.964) 0:52:21.639 *****
2026-02-05 05:32:50.094785 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:32:50.094792 | orchestrator |
2026-02-05 05:32:50.094800 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 05:32:50.094807 | orchestrator | Thursday 05 February 2026 05:32:39 +0000 (0:00:02.261) 0:52:23.900 *****
2026-02-05 05:32:50.094814 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-05 05:32:50.094822 | orchestrator |
2026-02-05 05:32:50.094829 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 05:32:50.094837 | orchestrator | Thursday 05 February 2026 05:32:40 +0000 (0:00:01.075) 0:52:24.976 *****
2026-02-05 05:32:50.094844 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094851 | orchestrator |
2026-02-05 05:32:50.094858 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 05:32:50.094865 | orchestrator | Thursday 05 February 2026 05:32:41 +0000 (0:00:01.092) 0:52:26.069 *****
2026-02-05 05:32:50.094873 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094880 | orchestrator |
2026-02-05 05:32:50.094887 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 05:32:50.094894 | orchestrator | Thursday 05 February 2026 05:32:42 +0000 (0:00:01.111) 0:52:27.181 *****
2026-02-05 05:32:50.094901 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 05:32:50.094909 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 05:32:50.094916 | orchestrator |
2026-02-05 05:32:50.094923 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 05:32:50.094930 | orchestrator | Thursday 05 February 2026 05:32:44 +0000 (0:00:01.821) 0:52:29.002 *****
2026-02-05 05:32:50.094938 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:32:50.094945 | orchestrator |
2026-02-05 05:32:50.094952 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 05:32:50.094959 | orchestrator | Thursday 05 February 2026 05:32:45 +0000 (0:00:01.461) 0:52:30.464 *****
2026-02-05 05:32:50.094966 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.094973 | orchestrator |
2026-02-05 05:32:50.094980 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 05:32:50.094987 | orchestrator | Thursday 05 February 2026 05:32:46 +0000 (0:00:01.121) 0:52:31.586 *****
2026-02-05 05:32:50.094995 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.095002 | orchestrator |
2026-02-05 05:32:50.095009 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 05:32:50.095016 | orchestrator | Thursday 05 February 2026 05:32:47 +0000 (0:00:01.108) 0:52:32.694 *****
2026-02-05 05:32:50.095023 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:32:50.095035 | orchestrator |
2026-02-05 05:32:50.095043 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 05:32:50.095050 | orchestrator | Thursday 05 February 2026 05:32:48 +0000 (0:00:01.103) 0:52:33.798 *****
2026-02-05 05:32:50.095057 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-05 05:32:50.095064 | orchestrator |
2026-02-05 05:32:50.095072 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 05:32:50.095084 | orchestrator | Thursday 05 February 2026 05:32:50 +0000 (0:00:01.103) 0:52:34.902 *****
2026-02-05 05:33:36.180110 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:33:36.180212 | orchestrator |
2026-02-05 05:33:36.180225 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 05:33:36.180234 | orchestrator | Thursday 05 February 2026 05:32:51 +0000 (0:00:01.659) 0:52:36.561 *****
2026-02-05 05:33:36.180243 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 05:33:36.180250 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 05:33:36.180256 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 05:33:36.180263 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180271 | orchestrator |
2026-02-05 05:33:36.180278 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 05:33:36.180284 | orchestrator | Thursday 05 February 2026 05:32:52 +0000 (0:00:01.139) 0:52:37.701 *****
2026-02-05 05:33:36.180291 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180297 | orchestrator |
2026-02-05 05:33:36.180303 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 05:33:36.180310 | orchestrator | Thursday 05 February 2026 05:32:54 +0000 (0:00:01.168) 0:52:38.870 *****
2026-02-05 05:33:36.180316 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180323 | orchestrator |
2026-02-05 05:33:36.180329 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 05:33:36.180335 | orchestrator | Thursday 05 February 2026 05:32:55 +0000 (0:00:01.161) 0:52:40.032 *****
2026-02-05 05:33:36.180341 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180348 | orchestrator |
2026-02-05 05:33:36.180354 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 05:33:36.180360 | orchestrator | Thursday 05 February 2026 05:32:56 +0000 (0:00:01.131) 0:52:41.164 *****
2026-02-05 05:33:36.180367 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180373 | orchestrator |
2026-02-05 05:33:36.180379 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-05 05:33:36.180385 | orchestrator | Thursday 05 February 2026 05:32:57 +0000 (0:00:01.118) 0:52:42.282 *****
2026-02-05 05:33:36.180391 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180397 | orchestrator |
2026-02-05 05:33:36.180404 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 05:33:36.180410 | orchestrator | Thursday 05 February 2026 05:32:58 +0000 (0:00:01.143) 0:52:43.426 *****
2026-02-05 05:33:36.180416 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:33:36.180423 | orchestrator |
2026-02-05 05:33:36.180428 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 05:33:36.180448 | orchestrator | Thursday 05 February 2026 05:33:01 +0000 (0:00:02.515) 0:52:45.941 *****
2026-02-05 05:33:36.180455 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:33:36.180462 | orchestrator |
2026-02-05 05:33:36.180468 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 05:33:36.180474 | orchestrator | Thursday 05 February 2026 05:33:02 +0000 (0:00:01.148) 0:52:47.090 *****
2026-02-05 05:33:36.180480 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-05 05:33:36.180487 | orchestrator |
2026-02-05 05:33:36.180493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-05 05:33:36.180499 | orchestrator | Thursday 05 February 2026 05:33:03 +0000 (0:00:01.110) 0:52:48.200 *****
2026-02-05 05:33:36.180542 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180549 | orchestrator |
2026-02-05 05:33:36.180555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-05 05:33:36.180561 | orchestrator | Thursday 05 February 2026 05:33:04 +0000 (0:00:01.124) 0:52:49.325 *****
2026-02-05 05:33:36.180567 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180574 | orchestrator |
2026-02-05 05:33:36.180580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-05 05:33:36.180586 | orchestrator | Thursday 05 February 2026 05:33:05 +0000 (0:00:01.126) 0:52:50.452 *****
2026-02-05 05:33:36.180592 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180598 | orchestrator |
2026-02-05 05:33:36.180603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-05 05:33:36.180608 | orchestrator | Thursday 05 February 2026 05:33:06 +0000 (0:00:01.165) 0:52:51.617 *****
2026-02-05 05:33:36.180614 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180620 | orchestrator |
2026-02-05 05:33:36.180626 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-05 05:33:36.180632 | orchestrator | Thursday 05 February 2026 05:33:07 +0000 (0:00:01.138) 0:52:52.755 *****
2026-02-05 05:33:36.180637 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180642 | orchestrator |
2026-02-05 05:33:36.180648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-05 05:33:36.180654 | orchestrator | Thursday 05 February 2026 05:33:09 +0000 (0:00:01.137) 0:52:53.892 *****
2026-02-05 05:33:36.180660 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180667 | orchestrator |
2026-02-05 05:33:36.180673 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-05 05:33:36.180679 | orchestrator | Thursday 05 February 2026 05:33:10 +0000 (0:00:01.126) 0:52:55.019 *****
2026-02-05 05:33:36.180685 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180691 | orchestrator |
2026-02-05 05:33:36.180703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-05 05:33:36.180713 | orchestrator | Thursday 05 February 2026 05:33:11 +0000 (0:00:01.170) 0:52:56.190 *****
2026-02-05 05:33:36.180719 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.180726 | orchestrator |
2026-02-05 05:33:36.180732 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-05 05:33:36.180739 | orchestrator | Thursday 05 February 2026 05:33:12 +0000 (0:00:01.199) 0:52:57.389 *****
2026-02-05 05:33:36.180745 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:33:36.180752 | orchestrator |
2026-02-05 05:33:36.180759 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 05:33:36.180779 | orchestrator | Thursday 05 February 2026 05:33:13 +0000 (0:00:01.187) 0:52:58.577 *****
2026-02-05 05:33:36.180785 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-05 05:33:36.180793 | orchestrator |
2026-02-05 05:33:36.180799 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-05 05:33:36.180807 | orchestrator | Thursday 05 February 2026 05:33:14 +0000 (0:00:01.106) 0:52:59.684 *****
2026-02-05 05:33:36.180815 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-05 05:33:36.180821 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-05 05:33:36.180826 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-05 05:33:36.180832 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-05 05:33:36.180838 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-05 05:33:36.180844 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-05 05:33:36.180850 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-05 05:33:36.180857 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-05 05:33:36.180864 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 05:33:36.180880 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 05:33:36.180888 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 05:33:36.180894 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 05:33:36.180900 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 05:33:36.180907 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-05 05:33:36.180914 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-05 05:33:36.180920 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-05 05:33:36.180927 | orchestrator |
2026-02-05 05:33:36.180933 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-05 05:33:36.180939 | orchestrator | Thursday 05 February 2026 05:33:21 +0000 (0:00:06.690) 0:53:06.374 *****
2026-02-05 05:33:36.180945 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-05 05:33:36.180952 | orchestrator |
2026-02-05 05:33:36.180959 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-05 05:33:36.180965 | orchestrator | Thursday 05 February 2026 05:33:22 +0000 (0:00:01.100) 0:53:07.475 *****
2026-02-05 05:33:36.180978 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 05:33:36.180986 | orchestrator |
2026-02-05 05:33:36.180993 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-05 05:33:36.180999 | orchestrator | Thursday 05 February 2026 05:33:24 +0000 (0:00:01.470) 0:53:08.946 *****
2026-02-05 05:33:36.181005 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 05:33:36.181011 | orchestrator |
2026-02-05 05:33:36.181017 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-05 05:33:36.181024 | orchestrator | Thursday 05 February 2026 05:33:26 +0000 (0:00:01.941) 0:53:10.887 *****
2026-02-05 05:33:36.181030 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181035 | orchestrator |
2026-02-05 05:33:36.181041 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-05 05:33:36.181048 | orchestrator | Thursday 05 February 2026 05:33:27 +0000 (0:00:01.095) 0:53:11.983 *****
2026-02-05 05:33:36.181054 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181060 | orchestrator |
2026-02-05 05:33:36.181066 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-05 05:33:36.181072 | orchestrator | Thursday 05 February 2026 05:33:28 +0000 (0:00:01.099) 0:53:13.083 *****
2026-02-05 05:33:36.181078 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181084 | orchestrator |
2026-02-05 05:33:36.181090 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-05 05:33:36.181096 | orchestrator | Thursday 05 February 2026 05:33:29 +0000 (0:00:01.125) 0:53:14.208 *****
2026-02-05 05:33:36.181102 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181108 | orchestrator |
2026-02-05 05:33:36.181114 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-05 05:33:36.181120 | orchestrator | Thursday 05 February 2026 05:33:30 +0000 (0:00:01.118) 0:53:15.327 *****
2026-02-05 05:33:36.181126 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181133 | orchestrator |
2026-02-05 05:33:36.181139 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-05 05:33:36.181145 | orchestrator | Thursday 05 February 2026 05:33:31 +0000 (0:00:01.175) 0:53:16.502 *****
2026-02-05 05:33:36.181151 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181157 | orchestrator |
2026-02-05 05:33:36.181164 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-05 05:33:36.181170 | orchestrator | Thursday 05 February 2026 05:33:32 +0000 (0:00:01.110) 0:53:17.613 *****
2026-02-05 05:33:36.181183 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181189 | orchestrator |
2026-02-05 05:33:36.181195 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-05 05:33:36.181202 | orchestrator | Thursday 05 February 2026 05:33:33 +0000 (0:00:01.110) 0:53:18.724 *****
2026-02-05 05:33:36.181208 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181214 | orchestrator |
2026-02-05 05:33:36.181220 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-05 05:33:36.181227 | orchestrator | Thursday 05 February 2026 05:33:35 +0000 (0:00:01.151) 0:53:19.875 *****
2026-02-05 05:33:36.181233 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:33:36.181239 | orchestrator |
2026-02-05 05:33:36.181251 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-05 05:34:31.227218 | orchestrator | Thursday 05 February 2026 05:33:36 +0000 (0:00:01.109) 0:53:20.985 *****
2026-02-05 05:34:31.227364 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227377 | orchestrator |
2026-02-05 05:34:31.227386 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-05 05:34:31.227394 | orchestrator | Thursday 05 February 2026 05:33:37 +0000 (0:00:01.204) 0:53:22.189 *****
2026-02-05 05:34:31.227402 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227410 | orchestrator |
2026-02-05 05:34:31.227418 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-05 05:34:31.227443 | orchestrator | Thursday 05 February 2026 05:33:38 +0000 (0:00:01.126) 0:53:23.316 *****
2026-02-05 05:34:31.227452 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-05 05:34:31.227460 | orchestrator |
2026-02-05 05:34:31.227468 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-05 05:34:31.227476 | orchestrator | Thursday 05 February 2026 05:33:43 +0000 (0:00:04.645) 0:53:27.961 *****
2026-02-05 05:34:31.227486 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 05:34:31.227495 | orchestrator |
2026-02-05 05:34:31.227503 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-05 05:34:31.227512 | orchestrator | Thursday 05 February 2026 05:33:44 +0000 (0:00:01.150) 0:53:29.111 *****
2026-02-05 05:34:31.227523 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-05 05:34:31.227536 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-05 05:34:31.227546 | orchestrator |
2026-02-05 05:34:31.227574 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-05 05:34:31.227583 | orchestrator | Thursday 05 February 2026 05:33:49 +0000 (0:00:04.919) 0:53:34.031 *****
2026-02-05 05:34:31.227592 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227601 | orchestrator |
2026-02-05 05:34:31.227609 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-05 05:34:31.227618 | orchestrator | Thursday 05 February 2026 05:33:50 +0000 (0:00:01.125) 0:53:35.156 *****
2026-02-05 05:34:31.227627 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227635 | orchestrator |
2026-02-05 05:34:31.227644 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:34:31.227653 | orchestrator | Thursday 05 February 2026 05:33:51 +0000 (0:00:01.181) 0:53:36.338 *****
2026-02-05 05:34:31.227686 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227695 | orchestrator |
2026-02-05 05:34:31.227703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:34:31.227712 | orchestrator | Thursday 05 February 2026 05:33:52 +0000 (0:00:01.115) 0:53:37.454 *****
2026-02-05 05:34:31.227721 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227730 | orchestrator |
2026-02-05 05:34:31.227739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:34:31.227748 | orchestrator | Thursday 05 February 2026 05:33:53 +0000 (0:00:01.137) 0:53:38.592 *****
2026-02-05 05:34:31.227758 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227767 | orchestrator |
2026-02-05 05:34:31.227777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:34:31.227785 | orchestrator | Thursday 05 February 2026 05:33:54 +0000 (0:00:01.128) 0:53:39.721 *****
2026-02-05 05:34:31.227793 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.227804 | orchestrator |
2026-02-05 05:34:31.227811 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:34:31.227818 | orchestrator | Thursday 05 February 2026 05:33:56 +0000 (0:00:01.262) 0:53:40.983 *****
2026-02-05 05:34:31.227826 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 05:34:31.227835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 05:34:31.227843 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:34:31.227852 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227860 | orchestrator |
2026-02-05 05:34:31.227868 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:34:31.227876 | orchestrator | Thursday 05 February 2026 05:33:57 +0000 (0:00:01.373) 0:53:42.357 *****
2026-02-05 05:34:31.227884 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 05:34:31.227893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 05:34:31.227901 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:34:31.227909 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227917 | orchestrator |
2026-02-05 05:34:31.227925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:34:31.227934 | orchestrator | Thursday 05 February 2026 05:33:58 +0000 (0:00:01.384) 0:53:43.742 *****
2026-02-05 05:34:31.227942 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 05:34:31.227950 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 05:34:31.227957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:34:31.227985 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.227994 | orchestrator |
2026-02-05 05:34:31.228002 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:34:31.228009 | orchestrator | Thursday 05 February 2026 05:34:00 +0000 (0:00:01.387) 0:53:45.129 *****
2026-02-05 05:34:31.228016 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228024 | orchestrator |
2026-02-05 05:34:31.228031 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:34:31.228040 | orchestrator | Thursday 05 February 2026 05:34:01 +0000 (0:00:01.164) 0:53:46.294 *****
2026-02-05 05:34:31.228047 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-05 05:34:31.228053 | orchestrator |
2026-02-05 05:34:31.228062 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 05:34:31.228070 | orchestrator | Thursday 05 February 2026 05:34:02 +0000 (0:00:01.318) 0:53:47.613 *****
2026-02-05 05:34:31.228077 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228084 | orchestrator |
2026-02-05 05:34:31.228091 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-05 05:34:31.228098 | orchestrator | Thursday 05 February 2026 05:34:04 +0000 (0:00:01.744) 0:53:49.357 *****
2026-02-05 05:34:31.228106 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.228114 | orchestrator |
2026-02-05 05:34:31.228131 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-05 05:34:31.228139 | orchestrator | Thursday 05 February 2026 05:34:05 +0000 (0:00:01.148) 0:53:50.506 *****
2026-02-05 05:34:31.228147 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5
2026-02-05 05:34:31.228154 | orchestrator |
2026-02-05 05:34:31.228162 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-05 05:34:31.228169 | orchestrator | Thursday 05 February 2026 05:34:07 +0000 (0:00:01.555) 0:53:52.062 *****
2026-02-05 05:34:31.228177 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-05 05:34:31.228186 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-05 05:34:31.228194 | orchestrator |
2026-02-05 05:34:31.228203 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-05 05:34:31.228211 | orchestrator | Thursday 05 February 2026 05:34:09 +0000 (0:00:01.870) 0:53:53.932 *****
2026-02-05 05:34:31.228219 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:34:31.228228 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-05 05:34:31.228243 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-05 05:34:31.228251 | orchestrator |
2026-02-05 05:34:31.228260 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-05 05:34:31.228267 | orchestrator | Thursday 05 February 2026 05:34:12 +0000 (0:00:03.414) 0:53:57.347 *****
2026-02-05 05:34:31.228274 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-05 05:34:31.228282 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-05 05:34:31.228290 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228297 | orchestrator |
2026-02-05 05:34:31.228305 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-05 05:34:31.228312 | orchestrator | Thursday 05 February 2026 05:34:14 +0000 (0:00:01.970) 0:53:59.317 *****
2026-02-05 05:34:31.228319 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228327 | orchestrator |
2026-02-05 05:34:31.228334 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-05 05:34:31.228342 | orchestrator | Thursday 05 February 2026 05:34:15 +0000 (0:00:01.498) 0:54:00.815 *****
2026-02-05 05:34:31.228349 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:34:31.228356 | orchestrator |
2026-02-05 05:34:31.228364 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-05 05:34:31.228371 | orchestrator | Thursday 05 February 2026 05:34:17 +0000 (0:00:01.104) 0:54:01.920 *****
2026-02-05 05:34:31.228379 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5
2026-02-05 05:34:31.228387 | orchestrator |
2026-02-05 05:34:31.228395 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-05 05:34:31.228402 | orchestrator | Thursday 05 February 2026 05:34:18 +0000 (0:00:01.440) 0:54:03.360 *****
2026-02-05 05:34:31.228409 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5
2026-02-05 05:34:31.228417 | orchestrator |
2026-02-05 05:34:31.228443 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-05 05:34:31.228451 | orchestrator | Thursday 05 February 2026 05:34:20 +0000 (0:00:01.465) 0:54:04.825 *****
2026-02-05 05:34:31.228460 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228466 | orchestrator |
2026-02-05 05:34:31.228473 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-05 05:34:31.228480 | orchestrator | Thursday 05 February 2026 05:34:22 +0000 (0:00:02.038) 0:54:06.864 *****
2026-02-05 05:34:31.228488 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228495 | orchestrator |
2026-02-05 05:34:31.228502 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-05 05:34:31.228510 | orchestrator | Thursday 05 February 2026 05:34:23 +0000 (0:00:02.285) 0:54:08.772 *****
2026-02-05 05:34:31.228517 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228524 | orchestrator |
2026-02-05 05:34:31.228537 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-05 05:34:31.228544 | orchestrator | Thursday 05 February 2026 05:34:26 +0000 (0:00:02.262) 0:54:11.057 *****
2026-02-05 05:34:31.228551 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228559 | orchestrator |
2026-02-05 05:34:31.228567 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-05 05:34:31.228575 | orchestrator | Thursday 05 February 2026 05:34:28 +0000 (0:00:02.262) 0:54:13.320 *****
2026-02-05 05:34:31.228583 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:34:31.228590 | orchestrator |
2026-02-05 05:34:31.228597 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-02-05 05:34:31.228604 | orchestrator | Thursday 05 February 2026 05:34:30 +0000 (0:00:01.578) 0:54:14.899 *****
2026-02-05 05:34:31.228623 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:35:02.286655 | orchestrator |
2026-02-05 05:35:02.286769 | orchestrator | TASK [Restart active mds] ******************************************************
2026-02-05 05:35:02.286783 | orchestrator | Thursday 05 February 2026 05:34:31 +0000 (0:00:01.137) 0:54:16.037 *****
2026-02-05 05:35:02.286793 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:35:02.286804 | orchestrator |
2026-02-05 05:35:02.286813 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-02-05 05:35:02.286822 | orchestrator |
2026-02-05 05:35:02.286831 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:35:02.286840 | orchestrator | Thursday 05 February 2026 05:34:39 +0000 (0:00:08.063) 0:54:24.100 *****
2026-02-05 05:35:02.286849 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4
2026-02-05 05:35:02.286859 | orchestrator |
2026-02-05 05:35:02.286868 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 05:35:02.286877 | orchestrator | Thursday 05 February 2026 05:34:40 +0000 (0:00:01.187) 0:54:25.288 *****
2026-02-05 05:35:02.286886 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:35:02.286895 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:35:02.286903 | orchestrator |
2026-02-05 05:35:02.286912 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 05:35:02.286922 | orchestrator | Thursday 05 February 2026 05:34:42 +0000 (0:00:01.539) 0:54:26.827 *****
2026-02-05 05:35:02.286931 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:35:02.286940 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:35:02.286949 | orchestrator |
2026-02-05 05:35:02.286958 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:35:02.286967 | orchestrator | Thursday 05 February 2026 05:34:43 +0000 (0:00:01.563) 0:54:28.064 *****
2026-02-05 05:35:02.286976 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:35:02.286985 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:35:02.286993 | orchestrator |
2026-02-05 05:35:02.287002 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:35:02.287011 | orchestrator | Thursday 05 February 2026 05:34:44 +0000 (0:00:01.563) 0:54:29.627 *****
2026-02-05 05:35:02.287020 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:35:02.287029 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:35:02.287038 | orchestrator |
2026-02-05 05:35:02.287047 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 05:35:02.287056 | orchestrator | Thursday 05 February 2026 05:34:45 +0000 (0:00:01.184) 0:54:30.812 *****
2026-02-05 05:35:02.287064 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:35:02.287073 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:35:02.287082 | orchestrator |
2026-02-05 05:35:02.287107 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 05:35:02.287117 | orchestrator | Thursday 05 February
2026 05:34:47 +0000 (0:00:01.215) 0:54:32.027 ***** 2026-02-05 05:35:02.287126 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:02.287135 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:02.287143 | orchestrator | 2026-02-05 05:35:02.287152 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:35:02.287179 | orchestrator | Thursday 05 February 2026 05:34:48 +0000 (0:00:01.325) 0:54:33.353 ***** 2026-02-05 05:35:02.287189 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:02.287202 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:02.287212 | orchestrator | 2026-02-05 05:35:02.287223 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:35:02.287234 | orchestrator | Thursday 05 February 2026 05:34:49 +0000 (0:00:01.218) 0:54:34.571 ***** 2026-02-05 05:35:02.287244 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:02.287255 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:02.287265 | orchestrator | 2026-02-05 05:35:02.287275 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:35:02.287285 | orchestrator | Thursday 05 February 2026 05:34:51 +0000 (0:00:01.568) 0:54:36.140 ***** 2026-02-05 05:35:02.287295 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:35:02.287305 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:35:02.287317 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:35:02.287327 | orchestrator | 2026-02-05 05:35:02.287338 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:35:02.287349 | orchestrator | Thursday 05 February 2026 05:34:52 +0000 (0:00:01.640) 0:54:37.780 ***** 2026-02-05 05:35:02.287359 | 
orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:02.287403 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:02.287421 | orchestrator | 2026-02-05 05:35:02.287431 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:35:02.287442 | orchestrator | Thursday 05 February 2026 05:34:54 +0000 (0:00:01.411) 0:54:39.191 ***** 2026-02-05 05:35:02.287453 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:35:02.287463 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:35:02.287473 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:35:02.287484 | orchestrator | 2026-02-05 05:35:02.287495 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:35:02.287506 | orchestrator | Thursday 05 February 2026 05:34:57 +0000 (0:00:02.813) 0:54:42.005 ***** 2026-02-05 05:35:02.287517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 05:35:02.287528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 05:35:02.287538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 05:35:02.287549 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:02.287559 | orchestrator | 2026-02-05 05:35:02.287570 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:35:02.287581 | orchestrator | Thursday 05 February 2026 05:34:58 +0000 (0:00:01.335) 0:54:43.340 ***** 2026-02-05 05:35:02.287607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:35:02.287619 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:35:02.287629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:35:02.287637 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:02.287647 | orchestrator | 2026-02-05 05:35:02.287656 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:35:02.287672 | orchestrator | Thursday 05 February 2026 05:35:00 +0000 (0:00:01.532) 0:54:44.873 ***** 2026-02-05 05:35:02.287683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:02.287695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:02.287709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:02.287719 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:02.287728 | orchestrator | 2026-02-05 05:35:02.287737 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:35:02.287746 | orchestrator | Thursday 05 February 2026 05:35:01 +0000 (0:00:01.096) 0:54:45.969 ***** 2026-02-05 05:35:02.287757 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:34:54.936791', 'end': '2026-02-05 05:34:54.985429', 'delta': '0:00:00.048638', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:35:02.287768 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:34:55.479740', 'end': '2026-02-05 05:34:55.521605', 'delta': '0:00:00.041865', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:35:02.287785 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:34:55.985299', 'end': '2026-02-05 05:34:56.045160', 'delta': '0:00:00.059861', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:35:20.260566 | orchestrator | 2026-02-05 05:35:20.260651 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:35:20.260658 | orchestrator | Thursday 05 February 2026 05:35:02 +0000 (0:00:01.127) 0:54:47.097 ***** 2026-02-05 05:35:20.260663 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:20.260668 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:20.260672 | orchestrator | 2026-02-05 05:35:20.260676 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:35:20.260681 | orchestrator | Thursday 05 February 2026 05:35:03 +0000 (0:00:01.117) 0:54:48.214 ***** 2026-02-05 05:35:20.260685 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260691 | orchestrator | 2026-02-05 05:35:20.260697 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:35:20.260703 | orchestrator | Thursday 
05 February 2026 05:35:04 +0000 (0:00:00.990) 0:54:49.205 ***** 2026-02-05 05:35:20.260708 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:20.260712 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:20.260716 | orchestrator | 2026-02-05 05:35:20.260720 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:35:20.260724 | orchestrator | Thursday 05 February 2026 05:35:05 +0000 (0:00:01.303) 0:54:50.509 ***** 2026-02-05 05:35:20.260728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:35:20.260732 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:35:20.260736 | orchestrator | 2026-02-05 05:35:20.260740 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:35:20.260744 | orchestrator | Thursday 05 February 2026 05:35:07 +0000 (0:00:02.278) 0:54:52.788 ***** 2026-02-05 05:35:20.260748 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:20.260751 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:20.260755 | orchestrator | 2026-02-05 05:35:20.260773 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:35:20.260777 | orchestrator | Thursday 05 February 2026 05:35:09 +0000 (0:00:01.160) 0:54:53.948 ***** 2026-02-05 05:35:20.260781 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260785 | orchestrator | 2026-02-05 05:35:20.260789 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:35:20.260792 | orchestrator | Thursday 05 February 2026 05:35:10 +0000 (0:00:01.083) 0:54:55.032 ***** 2026-02-05 05:35:20.260796 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260800 | orchestrator | 2026-02-05 05:35:20.260804 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 
05:35:20.260807 | orchestrator | Thursday 05 February 2026 05:35:11 +0000 (0:00:01.152) 0:54:56.184 ***** 2026-02-05 05:35:20.260811 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260815 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:20.260819 | orchestrator | 2026-02-05 05:35:20.260823 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:35:20.260826 | orchestrator | Thursday 05 February 2026 05:35:12 +0000 (0:00:01.263) 0:54:57.448 ***** 2026-02-05 05:35:20.260830 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260834 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:20.260838 | orchestrator | 2026-02-05 05:35:20.260842 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:35:20.260845 | orchestrator | Thursday 05 February 2026 05:35:13 +0000 (0:00:01.157) 0:54:58.606 ***** 2026-02-05 05:35:20.260849 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:20.260853 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:20.260857 | orchestrator | 2026-02-05 05:35:20.260860 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:35:20.260864 | orchestrator | Thursday 05 February 2026 05:35:14 +0000 (0:00:01.198) 0:54:59.805 ***** 2026-02-05 05:35:20.260868 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260872 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:20.260889 | orchestrator | 2026-02-05 05:35:20.260893 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:35:20.260897 | orchestrator | Thursday 05 February 2026 05:35:16 +0000 (0:00:01.412) 0:55:01.217 ***** 2026-02-05 05:35:20.260901 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:20.260905 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:20.260908 | orchestrator | 2026-02-05 
05:35:20.260912 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:35:20.260916 | orchestrator | Thursday 05 February 2026 05:35:17 +0000 (0:00:01.228) 0:55:02.445 ***** 2026-02-05 05:35:20.260920 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:20.260924 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:20.260927 | orchestrator | 2026-02-05 05:35:20.260931 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:35:20.260935 | orchestrator | Thursday 05 February 2026 05:35:18 +0000 (0:00:01.228) 0:55:03.674 ***** 2026-02-05 05:35:20.260939 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:20.260943 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:20.260946 | orchestrator | 2026-02-05 05:35:20.260950 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:35:20.260954 | orchestrator | Thursday 05 February 2026 05:35:20 +0000 (0:00:01.169) 0:55:04.843 ***** 2026-02-05 05:35:20.260959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.260976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}})  2026-02-05 05:35:20.260983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:35:20.260991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}})  2026-02-05 05:35:20.261000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.261005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.261010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:35:20.261015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.261024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:35:20.370850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.370941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}})  2026-02-05 05:35:20.370952 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}})  2026-02-05 05:35:20.370973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.370979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.370997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 05:35:20.371007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}})  2026-02-05 05:35:20.371016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.371022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-05 05:35:20.371027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:20.371032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}})  2026-02-05 05:35:20.371042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M', 'dm-uuid-CRYPT-LUKS2-7cbe1ae0472e401592481616ea071c47-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495135 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:21.495215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:35:21.495275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}})  2026-02-05 05:35:21.495316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}})  2026-02-05 05:35:21.495328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:35:21.495397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:35:21.495426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:35:21.698740 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:21.698837 | orchestrator | 2026-02-05 05:35:21.698852 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:35:21.698865 | orchestrator | Thursday 05 February 2026 05:35:21 +0000 (0:00:01.465) 0:55:06.309 ***** 2026-02-05 05:35:21.698917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.698933 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.698946 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.698960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.698991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699036 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.699112 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750271 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 
'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750473 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.750518 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-05 05:35:21.850197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850326 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M', 'dm-uuid-CRYPT-LUKS2-7cbe1ae0472e401592481616ea071c47-oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850377 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850409 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:21.850435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:21.850520 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:50.507504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:50.507619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:50.507630 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-05 05:35:50.507656 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.507663 | orchestrator | 2026-02-05 05:35:50.507671 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 05:35:50.507685 | orchestrator | Thursday 05 February 2026 05:35:22 +0000 (0:00:01.468) 0:55:07.778 ***** 2026-02-05 05:35:50.507691 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:50.507704 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:50.507709 | orchestrator | 2026-02-05 05:35:50.507714 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 05:35:50.507719 | orchestrator | Thursday 05 February 2026 05:35:24 +0000 (0:00:01.897) 0:55:09.675 ***** 2026-02-05 05:35:50.507724 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:50.507729 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:50.507734 | orchestrator | 2026-02-05 05:35:50.507739 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 05:35:50.507744 | orchestrator | Thursday 05 February 2026 05:35:26 +0000 (0:00:01.241) 0:55:10.917 ***** 2026-02-05 05:35:50.507750 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:50.507766 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:50.507771 | orchestrator | 2026-02-05 05:35:50.507776 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:35:50.507781 | orchestrator | Thursday 05 February 2026 05:35:27 +0000 (0:00:01.559) 0:55:12.477 ***** 2026-02-05 05:35:50.507786 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.507791 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.507797 | orchestrator | 2026-02-05 05:35:50.507802 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-02-05 05:35:50.507807 | orchestrator | Thursday 05 February 2026 05:35:28 +0000 (0:00:01.226) 0:55:13.704 ***** 2026-02-05 05:35:50.507812 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.507834 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.507840 | orchestrator | 2026-02-05 05:35:50.507845 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 05:35:50.507850 | orchestrator | Thursday 05 February 2026 05:35:30 +0000 (0:00:01.343) 0:55:15.048 ***** 2026-02-05 05:35:50.507855 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.507924 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.507932 | orchestrator | 2026-02-05 05:35:50.507937 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 05:35:50.507942 | orchestrator | Thursday 05 February 2026 05:35:31 +0000 (0:00:01.228) 0:55:16.276 ***** 2026-02-05 05:35:50.507947 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-05 05:35:50.507953 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-05 05:35:50.507958 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-05 05:35:50.507963 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-05 05:35:50.507968 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-05 05:35:50.507973 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-05 05:35:50.507978 | orchestrator | 2026-02-05 05:35:50.507983 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 05:35:50.507988 | orchestrator | Thursday 05 February 2026 05:35:33 +0000 (0:00:02.106) 0:55:18.383 ***** 2026-02-05 05:35:50.508007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 05:35:50.508013 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 05:35:50.508026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 05:35:50.508031 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 05:35:50.508043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 05:35:50.508049 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 05:35:50.508055 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.508060 | orchestrator | 2026-02-05 05:35:50.508066 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 05:35:50.508072 | orchestrator | Thursday 05 February 2026 05:35:35 +0000 (0:00:01.574) 0:55:19.958 ***** 2026-02-05 05:35:50.508079 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4 2026-02-05 05:35:50.508086 | orchestrator | 2026-02-05 05:35:50.508092 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:35:50.508099 | orchestrator | Thursday 05 February 2026 05:35:36 +0000 (0:00:01.240) 0:55:21.198 ***** 2026-02-05 05:35:50.508105 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508111 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.508116 | orchestrator | 2026-02-05 05:35:50.508122 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:35:50.508128 | orchestrator | Thursday 05 February 2026 05:35:37 +0000 (0:00:01.278) 0:55:22.477 ***** 2026-02-05 05:35:50.508134 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508140 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.508146 | orchestrator | 2026-02-05 05:35:50.508152 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:35:50.508157 | orchestrator | Thursday 05 February 2026 05:35:38 +0000 (0:00:01.309) 0:55:23.786 ***** 2026-02-05 05:35:50.508162 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508167 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:35:50.508172 | orchestrator | 2026-02-05 05:35:50.508178 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:35:50.508183 | orchestrator | Thursday 05 February 2026 05:35:40 +0000 (0:00:01.241) 0:55:25.028 ***** 2026-02-05 05:35:50.508188 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:50.508193 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:50.508198 | orchestrator | 2026-02-05 05:35:50.508203 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:35:50.508208 | orchestrator | Thursday 05 February 2026 05:35:41 +0000 (0:00:01.354) 0:55:26.382 ***** 2026-02-05 05:35:50.508213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:35:50.508218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:35:50.508223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:35:50.508228 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508233 | orchestrator | 2026-02-05 05:35:50.508238 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:35:50.508243 | orchestrator | Thursday 05 February 2026 05:35:42 +0000 (0:00:01.384) 0:55:27.766 ***** 2026-02-05 05:35:50.508248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:35:50.508253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:35:50.508258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-02-05 05:35:50.508263 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508268 | orchestrator | 2026-02-05 05:35:50.508273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:35:50.508279 | orchestrator | Thursday 05 February 2026 05:35:44 +0000 (0:00:01.397) 0:55:29.164 ***** 2026-02-05 05:35:50.508287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:35:50.508321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:35:50.508326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:35:50.508332 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:35:50.508337 | orchestrator | 2026-02-05 05:35:50.508342 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:35:50.508347 | orchestrator | Thursday 05 February 2026 05:35:45 +0000 (0:00:01.374) 0:55:30.538 ***** 2026-02-05 05:35:50.508352 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:35:50.508357 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:35:50.508362 | orchestrator | 2026-02-05 05:35:50.508367 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:35:50.508372 | orchestrator | Thursday 05 February 2026 05:35:46 +0000 (0:00:01.252) 0:55:31.791 ***** 2026-02-05 05:35:50.508377 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 05:35:50.508382 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 05:35:50.508387 | orchestrator | 2026-02-05 05:35:50.508392 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 05:35:50.508397 | orchestrator | Thursday 05 February 2026 05:35:48 +0000 (0:00:01.444) 0:55:33.236 ***** 2026-02-05 05:35:50.508403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 
05:35:50.508408 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:35:50.508413 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:35:50.508418 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 05:35:50.508423 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:35:50.508428 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:35:50.508437 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:36:33.863074 | orchestrator | 2026-02-05 05:36:33.863167 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 05:36:33.863178 | orchestrator | Thursday 05 February 2026 05:35:50 +0000 (0:00:02.076) 0:55:35.312 ***** 2026-02-05 05:36:33.863184 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:36:33.863191 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:36:33.863196 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:36:33.863203 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 05:36:33.863209 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 05:36:33.863214 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 05:36:33.863220 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 05:36:33.863225 | orchestrator | 2026-02-05 05:36:33.863296 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-02-05 
05:36:33.863303 | orchestrator | Thursday 05 February 2026 05:35:52 +0000 (0:00:02.505) 0:55:37.818 ***** 2026-02-05 05:36:33.863308 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.863315 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863320 | orchestrator | 2026-02-05 05:36:33.863326 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 05:36:33.863331 | orchestrator | Thursday 05 February 2026 05:35:54 +0000 (0:00:01.306) 0:55:39.125 ***** 2026-02-05 05:36:33.863337 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4 2026-02-05 05:36:33.863346 | orchestrator | 2026-02-05 05:36:33.863355 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:36:33.863364 | orchestrator | Thursday 05 February 2026 05:35:55 +0000 (0:00:01.189) 0:55:40.315 ***** 2026-02-05 05:36:33.863395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4 2026-02-05 05:36:33.863403 | orchestrator | 2026-02-05 05:36:33.863412 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:36:33.863421 | orchestrator | Thursday 05 February 2026 05:35:56 +0000 (0:00:01.208) 0:55:41.523 ***** 2026-02-05 05:36:33.863431 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.863439 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863447 | orchestrator | 2026-02-05 05:36:33.863455 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:36:33.863464 | orchestrator | Thursday 05 February 2026 05:35:57 +0000 (0:00:01.194) 0:55:42.717 ***** 2026-02-05 05:36:33.863473 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863483 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863492 | 
orchestrator | 2026-02-05 05:36:33.863501 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 05:36:33.863510 | orchestrator | Thursday 05 February 2026 05:35:59 +0000 (0:00:01.672) 0:55:44.390 ***** 2026-02-05 05:36:33.863519 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863526 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863534 | orchestrator | 2026-02-05 05:36:33.863542 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:36:33.863551 | orchestrator | Thursday 05 February 2026 05:36:01 +0000 (0:00:01.915) 0:55:46.305 ***** 2026-02-05 05:36:33.863560 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863571 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863579 | orchestrator | 2026-02-05 05:36:33.863587 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:36:33.863596 | orchestrator | Thursday 05 February 2026 05:36:03 +0000 (0:00:01.663) 0:55:47.970 ***** 2026-02-05 05:36:33.863620 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.863630 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863640 | orchestrator | 2026-02-05 05:36:33.863649 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:36:33.863658 | orchestrator | Thursday 05 February 2026 05:36:04 +0000 (0:00:01.182) 0:55:49.152 ***** 2026-02-05 05:36:33.863668 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.863678 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863687 | orchestrator | 2026-02-05 05:36:33.863697 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:36:33.863707 | orchestrator | Thursday 05 February 2026 05:36:05 +0000 (0:00:01.200) 0:55:50.352 ***** 2026-02-05 05:36:33.863716 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 05:36:33.863726 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863733 | orchestrator | 2026-02-05 05:36:33.863739 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:36:33.863746 | orchestrator | Thursday 05 February 2026 05:36:06 +0000 (0:00:01.180) 0:55:51.533 ***** 2026-02-05 05:36:33.863753 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863760 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863766 | orchestrator | 2026-02-05 05:36:33.863773 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:36:33.863779 | orchestrator | Thursday 05 February 2026 05:36:08 +0000 (0:00:01.691) 0:55:53.224 ***** 2026-02-05 05:36:33.863785 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863792 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863798 | orchestrator | 2026-02-05 05:36:33.863804 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:36:33.863811 | orchestrator | Thursday 05 February 2026 05:36:10 +0000 (0:00:01.602) 0:55:54.827 ***** 2026-02-05 05:36:33.863817 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.863823 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863830 | orchestrator | 2026-02-05 05:36:33.863836 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:36:33.863852 | orchestrator | Thursday 05 February 2026 05:36:11 +0000 (0:00:01.536) 0:55:56.364 ***** 2026-02-05 05:36:33.863859 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.863882 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.863889 | orchestrator | 2026-02-05 05:36:33.863895 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:36:33.863902 | orchestrator | Thursday 
05 February 2026 05:36:12 +0000 (0:00:01.221) 0:55:57.585 ***** 2026-02-05 05:36:33.863908 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863915 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863922 | orchestrator | 2026-02-05 05:36:33.863928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 05:36:33.863934 | orchestrator | Thursday 05 February 2026 05:36:13 +0000 (0:00:01.232) 0:55:58.817 ***** 2026-02-05 05:36:33.863941 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863947 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863953 | orchestrator | 2026-02-05 05:36:33.863959 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:36:33.863966 | orchestrator | Thursday 05 February 2026 05:36:15 +0000 (0:00:01.398) 0:56:00.216 ***** 2026-02-05 05:36:33.863972 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.863978 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.863984 | orchestrator | 2026-02-05 05:36:33.863990 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:36:33.863997 | orchestrator | Thursday 05 February 2026 05:36:16 +0000 (0:00:01.230) 0:56:01.446 ***** 2026-02-05 05:36:33.864003 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864009 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864015 | orchestrator | 2026-02-05 05:36:33.864020 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:36:33.864026 | orchestrator | Thursday 05 February 2026 05:36:17 +0000 (0:00:01.210) 0:56:02.657 ***** 2026-02-05 05:36:33.864031 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864036 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864042 | orchestrator | 2026-02-05 05:36:33.864047 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-02-05 05:36:33.864052 | orchestrator | Thursday 05 February 2026 05:36:19 +0000 (0:00:01.227) 0:56:03.885 ***** 2026-02-05 05:36:33.864058 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864063 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864068 | orchestrator | 2026-02-05 05:36:33.864076 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:36:33.864085 | orchestrator | Thursday 05 February 2026 05:36:20 +0000 (0:00:01.162) 0:56:05.048 ***** 2026-02-05 05:36:33.864093 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.864102 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.864110 | orchestrator | 2026-02-05 05:36:33.864119 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 05:36:33.864147 | orchestrator | Thursday 05 February 2026 05:36:21 +0000 (0:00:01.212) 0:56:06.260 ***** 2026-02-05 05:36:33.864157 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:36:33.864165 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:36:33.864174 | orchestrator | 2026-02-05 05:36:33.864183 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 05:36:33.864192 | orchestrator | Thursday 05 February 2026 05:36:22 +0000 (0:00:01.242) 0:56:07.502 ***** 2026-02-05 05:36:33.864200 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864209 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864218 | orchestrator | 2026-02-05 05:36:33.864244 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 05:36:33.864254 | orchestrator | Thursday 05 February 2026 05:36:23 +0000 (0:00:01.254) 0:56:08.756 ***** 2026-02-05 05:36:33.864263 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864271 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 05:36:33.864280 | orchestrator | 2026-02-05 05:36:33.864288 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 05:36:33.864306 | orchestrator | Thursday 05 February 2026 05:36:25 +0000 (0:00:01.213) 0:56:09.970 ***** 2026-02-05 05:36:33.864314 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864322 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864330 | orchestrator | 2026-02-05 05:36:33.864347 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 05:36:33.864356 | orchestrator | Thursday 05 February 2026 05:36:26 +0000 (0:00:01.500) 0:56:11.470 ***** 2026-02-05 05:36:33.864364 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864373 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864381 | orchestrator | 2026-02-05 05:36:33.864389 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 05:36:33.864398 | orchestrator | Thursday 05 February 2026 05:36:27 +0000 (0:00:01.196) 0:56:12.667 ***** 2026-02-05 05:36:33.864407 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864416 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864425 | orchestrator | 2026-02-05 05:36:33.864434 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 05:36:33.864443 | orchestrator | Thursday 05 February 2026 05:36:29 +0000 (0:00:01.199) 0:56:13.866 ***** 2026-02-05 05:36:33.864451 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864460 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864469 | orchestrator | 2026-02-05 05:36:33.864478 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 05:36:33.864488 | orchestrator | Thursday 05 February 2026 05:36:30 +0000 (0:00:01.179) 0:56:15.045 ***** 
2026-02-05 05:36:33.864494 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864500 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864505 | orchestrator | 2026-02-05 05:36:33.864510 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 05:36:33.864516 | orchestrator | Thursday 05 February 2026 05:36:31 +0000 (0:00:01.192) 0:56:16.238 ***** 2026-02-05 05:36:33.864521 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864527 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864532 | orchestrator | 2026-02-05 05:36:33.864537 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:36:33.864543 | orchestrator | Thursday 05 February 2026 05:36:32 +0000 (0:00:01.199) 0:56:17.438 ***** 2026-02-05 05:36:33.864548 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:36:33.864554 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:36:33.864559 | orchestrator | 2026-02-05 05:36:33.864574 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:37:19.556633 | orchestrator | Thursday 05 February 2026 05:36:33 +0000 (0:00:01.233) 0:56:18.671 ***** 2026-02-05 05:37:19.556746 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.556765 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.556777 | orchestrator | 2026-02-05 05:37:19.556790 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:37:19.556802 | orchestrator | Thursday 05 February 2026 05:36:35 +0000 (0:00:01.518) 0:56:20.189 ***** 2026-02-05 05:37:19.556813 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.556823 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.556834 | orchestrator | 2026-02-05 05:37:19.556845 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-05 05:37:19.556856 | orchestrator | Thursday 05 February 2026 05:36:36 +0000 (0:00:01.194) 0:56:21.384 ***** 2026-02-05 05:37:19.556867 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.556878 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.556888 | orchestrator | 2026-02-05 05:37:19.556899 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:37:19.556908 | orchestrator | Thursday 05 February 2026 05:36:37 +0000 (0:00:01.257) 0:56:22.641 ***** 2026-02-05 05:37:19.556918 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:37:19.556952 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:37:19.556963 | orchestrator | 2026-02-05 05:37:19.556975 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:37:19.556985 | orchestrator | Thursday 05 February 2026 05:36:39 +0000 (0:00:02.143) 0:56:24.784 ***** 2026-02-05 05:37:19.556996 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:37:19.557007 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:37:19.557018 | orchestrator | 2026-02-05 05:37:19.557029 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:37:19.557039 | orchestrator | Thursday 05 February 2026 05:36:42 +0000 (0:00:02.328) 0:56:27.113 ***** 2026-02-05 05:37:19.557051 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4 2026-02-05 05:37:19.557062 | orchestrator | 2026-02-05 05:37:19.557073 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:37:19.557083 | orchestrator | Thursday 05 February 2026 05:36:43 +0000 (0:00:01.443) 0:56:28.557 ***** 2026-02-05 05:37:19.557155 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557166 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 05:37:19.557224 | orchestrator | 2026-02-05 05:37:19.557236 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:37:19.557248 | orchestrator | Thursday 05 February 2026 05:36:45 +0000 (0:00:01.276) 0:56:29.833 ***** 2026-02-05 05:37:19.557260 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557272 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557283 | orchestrator | 2026-02-05 05:37:19.557295 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:37:19.557308 | orchestrator | Thursday 05 February 2026 05:36:46 +0000 (0:00:01.343) 0:56:31.177 ***** 2026-02-05 05:37:19.557319 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:37:19.557331 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:37:19.557344 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:37:19.557356 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:37:19.557367 | orchestrator | 2026-02-05 05:37:19.557379 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:37:19.557391 | orchestrator | Thursday 05 February 2026 05:36:48 +0000 (0:00:01.942) 0:56:33.120 ***** 2026-02-05 05:37:19.557402 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:37:19.557413 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:37:19.557424 | orchestrator | 2026-02-05 05:37:19.557450 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:37:19.557461 | orchestrator | Thursday 05 February 2026 05:36:49 +0000 (0:00:01.526) 0:56:34.647 ***** 2026-02-05 05:37:19.557472 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557483 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557493 | orchestrator | 2026-02-05 05:37:19.557504 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:37:19.557515 | orchestrator | Thursday 05 February 2026 05:36:51 +0000 (0:00:01.241) 0:56:35.889 ***** 2026-02-05 05:37:19.557526 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557536 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557547 | orchestrator | 2026-02-05 05:37:19.557558 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:37:19.557568 | orchestrator | Thursday 05 February 2026 05:36:52 +0000 (0:00:01.491) 0:56:37.380 ***** 2026-02-05 05:37:19.557579 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557589 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557600 | orchestrator | 2026-02-05 05:37:19.557610 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:37:19.557621 | orchestrator | Thursday 05 February 2026 05:36:53 +0000 (0:00:01.254) 0:56:38.635 ***** 2026-02-05 05:37:19.557641 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4 2026-02-05 05:37:19.557650 | orchestrator | 2026-02-05 05:37:19.557660 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:37:19.557669 | orchestrator | Thursday 05 February 2026 05:36:55 +0000 (0:00:01.208) 0:56:39.844 ***** 2026-02-05 05:37:19.557678 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:37:19.557688 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:37:19.557697 | orchestrator | 2026-02-05 05:37:19.557706 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:37:19.557718 | orchestrator | Thursday 05 February 2026 
05:36:57 +0000 (0:00:02.785) 0:56:42.630 ***** 2026-02-05 05:37:19.557729 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:37:19.557758 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:37:19.557770 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:37:19.557780 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557790 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:37:19.557799 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:37:19.557808 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:37:19.557817 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557826 | orchestrator | 2026-02-05 05:37:19.557835 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:37:19.557844 | orchestrator | Thursday 05 February 2026 05:36:59 +0000 (0:00:01.268) 0:56:43.898 ***** 2026-02-05 05:37:19.557853 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557863 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557873 | orchestrator | 2026-02-05 05:37:19.557882 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-05 05:37:19.557891 | orchestrator | Thursday 05 February 2026 05:37:00 +0000 (0:00:01.303) 0:56:45.202 ***** 2026-02-05 05:37:19.557901 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557910 | orchestrator | 2026-02-05 05:37:19.557919 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:37:19.557928 | orchestrator | Thursday 05 February 2026 05:37:01 +0000 (0:00:01.174) 0:56:46.377 ***** 2026-02-05 05:37:19.557937 | orchestrator | 
skipping: [testbed-node-3] 2026-02-05 05:37:19.557946 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557954 | orchestrator | 2026-02-05 05:37:19.557963 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:37:19.557971 | orchestrator | Thursday 05 February 2026 05:37:02 +0000 (0:00:01.242) 0:56:47.620 ***** 2026-02-05 05:37:19.557979 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.557988 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.557995 | orchestrator | 2026-02-05 05:37:19.558003 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:37:19.558065 | orchestrator | Thursday 05 February 2026 05:37:04 +0000 (0:00:01.252) 0:56:48.873 ***** 2026-02-05 05:37:19.558077 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558085 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558094 | orchestrator | 2026-02-05 05:37:19.558102 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:37:19.558110 | orchestrator | Thursday 05 February 2026 05:37:05 +0000 (0:00:01.204) 0:56:50.078 ***** 2026-02-05 05:37:19.558118 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:37:19.558127 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:37:19.558135 | orchestrator | 2026-02-05 05:37:19.558144 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:37:19.558152 | orchestrator | Thursday 05 February 2026 05:37:07 +0000 (0:00:02.653) 0:56:52.731 ***** 2026-02-05 05:37:19.558161 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:37:19.558298 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:37:19.558312 | orchestrator | 2026-02-05 05:37:19.558321 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:37:19.558330 | orchestrator 
| Thursday 05 February 2026 05:37:09 +0000 (0:00:01.202) 0:56:53.934 ***** 2026-02-05 05:37:19.558338 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4 2026-02-05 05:37:19.558348 | orchestrator | 2026-02-05 05:37:19.558356 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:37:19.558365 | orchestrator | Thursday 05 February 2026 05:37:10 +0000 (0:00:01.383) 0:56:55.318 ***** 2026-02-05 05:37:19.558374 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558383 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558392 | orchestrator | 2026-02-05 05:37:19.558402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:37:19.558411 | orchestrator | Thursday 05 February 2026 05:37:11 +0000 (0:00:01.219) 0:56:56.538 ***** 2026-02-05 05:37:19.558419 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558428 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558437 | orchestrator | 2026-02-05 05:37:19.558446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:37:19.558456 | orchestrator | Thursday 05 February 2026 05:37:12 +0000 (0:00:01.214) 0:56:57.752 ***** 2026-02-05 05:37:19.558465 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558475 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558484 | orchestrator | 2026-02-05 05:37:19.558494 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:37:19.558503 | orchestrator | Thursday 05 February 2026 05:37:14 +0000 (0:00:01.251) 0:56:59.004 ***** 2026-02-05 05:37:19.558513 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558522 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558532 | orchestrator | 2026-02-05 
05:37:19.558541 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-05 05:37:19.558550 | orchestrator | Thursday 05 February 2026 05:37:15 +0000 (0:00:01.230) 0:57:00.235 ***** 2026-02-05 05:37:19.558560 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558569 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558578 | orchestrator | 2026-02-05 05:37:19.558587 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:37:19.558597 | orchestrator | Thursday 05 February 2026 05:37:17 +0000 (0:00:01.657) 0:57:01.892 ***** 2026-02-05 05:37:19.558606 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558616 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558625 | orchestrator | 2026-02-05 05:37:19.558634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:37:19.558644 | orchestrator | Thursday 05 February 2026 05:37:18 +0000 (0:00:01.238) 0:57:03.131 ***** 2026-02-05 05:37:19.558653 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:37:19.558673 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:37:19.558692 | orchestrator | 2026-02-05 05:37:19.558714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:38:00.111698 | orchestrator | Thursday 05 February 2026 05:37:19 +0000 (0:00:01.235) 0:57:04.366 ***** 2026-02-05 05:38:00.111835 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.111848 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.111856 | orchestrator | 2026-02-05 05:38:00.111867 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:38:00.111882 | orchestrator | Thursday 05 February 2026 05:37:20 +0000 (0:00:01.248) 0:57:05.615 ***** 2026-02-05 05:38:00.111894 | orchestrator | ok: 
[testbed-node-3] 2026-02-05 05:38:00.111904 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:00.111913 | orchestrator | 2026-02-05 05:38:00.111924 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:38:00.111933 | orchestrator | Thursday 05 February 2026 05:37:21 +0000 (0:00:01.203) 0:57:06.819 ***** 2026-02-05 05:38:00.111965 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4 2026-02-05 05:38:00.111974 | orchestrator | 2026-02-05 05:38:00.111984 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:38:00.111993 | orchestrator | Thursday 05 February 2026 05:37:23 +0000 (0:00:01.192) 0:57:08.011 ***** 2026-02-05 05:38:00.112003 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-05 05:38:00.112013 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-05 05:38:00.112023 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-05 05:38:00.112033 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-05 05:38:00.112043 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-05 05:38:00.112050 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-05 05:38:00.112056 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-05 05:38:00.112062 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-05 05:38:00.112068 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-05 05:38:00.112074 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-05 05:38:00.112080 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-05 05:38:00.112086 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-05 05:38:00.112092 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 
2026-02-05 05:38:00.112097 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:38:00.112103 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-05 05:38:00.112109 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:38:00.112116 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:38:00.112121 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:38:00.112178 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:38:00.112193 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:38:00.112203 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:38:00.112212 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:38:00.112271 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:38:00.112283 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:38:00.112292 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:38:00.112302 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:38:00.112311 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:38:00.112325 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-05 05:38:00.112336 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:38:00.112346 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-05 05:38:00.112357 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-05 05:38:00.112367 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-05 05:38:00.112377 | orchestrator | 2026-02-05 05:38:00.112385 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:38:00.112395 | orchestrator | Thursday 05 February 2026 05:37:30 +0000 (0:00:07.245) 0:57:15.257 ***** 2026-02-05 05:38:00.112405 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4 2026-02-05 05:38:00.112416 | orchestrator | 2026-02-05 05:38:00.112425 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 05:38:00.112434 | orchestrator | Thursday 05 February 2026 05:37:31 +0000 (0:00:01.223) 0:57:16.480 ***** 2026-02-05 05:38:00.112455 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:38:00.112468 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:38:00.112478 | orchestrator | 2026-02-05 05:38:00.112489 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:38:00.112498 | orchestrator | Thursday 05 February 2026 05:37:33 +0000 (0:00:01.588) 0:57:18.069 ***** 2026-02-05 05:38:00.112508 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:38:00.112518 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:38:00.112529 | orchestrator | 2026-02-05 05:38:00.112539 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:38:00.112567 | orchestrator | Thursday 05 February 2026 05:37:35 +0000 (0:00:02.091) 0:57:20.160 ***** 2026-02-05 05:38:00.112574 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112582 | orchestrator | 
skipping: [testbed-node-4] 2026-02-05 05:38:00.112589 | orchestrator | 2026-02-05 05:38:00.112596 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:38:00.112603 | orchestrator | Thursday 05 February 2026 05:37:36 +0000 (0:00:01.192) 0:57:21.353 ***** 2026-02-05 05:38:00.112610 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112619 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112629 | orchestrator | 2026-02-05 05:38:00.112637 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:38:00.112643 | orchestrator | Thursday 05 February 2026 05:37:37 +0000 (0:00:01.256) 0:57:22.610 ***** 2026-02-05 05:38:00.112653 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112660 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112666 | orchestrator | 2026-02-05 05:38:00.112672 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 05:38:00.112677 | orchestrator | Thursday 05 February 2026 05:37:39 +0000 (0:00:01.236) 0:57:23.846 ***** 2026-02-05 05:38:00.112683 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112689 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112695 | orchestrator | 2026-02-05 05:38:00.112701 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:38:00.112707 | orchestrator | Thursday 05 February 2026 05:37:40 +0000 (0:00:01.194) 0:57:25.040 ***** 2026-02-05 05:38:00.112712 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112718 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112724 | orchestrator | 2026-02-05 05:38:00.112730 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:38:00.112736 | orchestrator | Thursday 05 February 2026 
05:37:41 +0000 (0:00:01.203) 0:57:26.244 ***** 2026-02-05 05:38:00.112741 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112747 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112753 | orchestrator | 2026-02-05 05:38:00.112759 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:38:00.112765 | orchestrator | Thursday 05 February 2026 05:37:42 +0000 (0:00:01.203) 0:57:27.448 ***** 2026-02-05 05:38:00.112770 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112776 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112782 | orchestrator | 2026-02-05 05:38:00.112788 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 05:38:00.112794 | orchestrator | Thursday 05 February 2026 05:37:43 +0000 (0:00:01.322) 0:57:28.770 ***** 2026-02-05 05:38:00.112800 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112805 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112811 | orchestrator | 2026-02-05 05:38:00.112817 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:38:00.112829 | orchestrator | Thursday 05 February 2026 05:37:45 +0000 (0:00:01.185) 0:57:29.956 ***** 2026-02-05 05:38:00.112835 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112840 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112846 | orchestrator | 2026-02-05 05:38:00.112852 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:38:00.112858 | orchestrator | Thursday 05 February 2026 05:37:46 +0000 (0:00:01.259) 0:57:31.215 ***** 2026-02-05 05:38:00.112864 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112869 | orchestrator | skipping: [testbed-node-4] 2026-02-05 
05:38:00.112875 | orchestrator | 2026-02-05 05:38:00.112881 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:38:00.112887 | orchestrator | Thursday 05 February 2026 05:37:47 +0000 (0:00:01.324) 0:57:32.539 ***** 2026-02-05 05:38:00.112893 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:00.112903 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:00.112909 | orchestrator | 2026-02-05 05:38:00.112915 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:38:00.112920 | orchestrator | Thursday 05 February 2026 05:37:48 +0000 (0:00:01.225) 0:57:33.765 ***** 2026-02-05 05:38:00.112926 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:38:00.112932 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:38:00.112938 | orchestrator | 2026-02-05 05:38:00.112944 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:38:00.112949 | orchestrator | Thursday 05 February 2026 05:37:53 +0000 (0:00:04.855) 0:57:38.621 ***** 2026-02-05 05:38:00.112955 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:38:00.112961 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:38:00.112967 | orchestrator | 2026-02-05 05:38:00.112973 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:38:00.112979 | orchestrator | Thursday 05 February 2026 05:37:55 +0000 (0:00:01.246) 0:57:39.867 ***** 2026-02-05 05:38:00.112986 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-05 05:38:00.113000 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-05 05:38:48.493281 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-05 05:38:48.493396 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-05 05:38:48.493413 | orchestrator | 2026-02-05 05:38:48.493431 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:38:48.493452 | orchestrator | Thursday 05 February 2026 05:38:00 +0000 (0:00:05.054) 0:57:44.921 ***** 2026-02-05 05:38:48.493509 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.493533 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.493551 | orchestrator | 2026-02-05 05:38:48.493570 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:38:48.493586 | orchestrator | Thursday 05 February 2026 05:38:01 
+0000 (0:00:01.230) 0:57:46.152 ***** 2026-02-05 05:38:48.493606 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.493624 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.493642 | orchestrator | 2026-02-05 05:38:48.493661 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:38:48.493682 | orchestrator | Thursday 05 February 2026 05:38:02 +0000 (0:00:01.468) 0:57:47.621 ***** 2026-02-05 05:38:48.493700 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.493719 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.493737 | orchestrator | 2026-02-05 05:38:48.493752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:38:48.493763 | orchestrator | Thursday 05 February 2026 05:38:04 +0000 (0:00:01.314) 0:57:48.935 ***** 2026-02-05 05:38:48.493774 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.493784 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.493795 | orchestrator | 2026-02-05 05:38:48.493806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:38:48.493820 | orchestrator | Thursday 05 February 2026 05:38:05 +0000 (0:00:01.275) 0:57:50.210 ***** 2026-02-05 05:38:48.493832 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.493848 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.493872 | orchestrator | 2026-02-05 05:38:48.493898 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:38:48.493918 | orchestrator | Thursday 05 February 2026 05:38:06 +0000 (0:00:01.265) 0:57:51.476 ***** 2026-02-05 05:38:48.493936 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.493955 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.493972 | orchestrator | 2026-02-05 
05:38:48.493989 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:38:48.494007 | orchestrator | Thursday 05 February 2026 05:38:08 +0000 (0:00:01.369) 0:57:52.846 ***** 2026-02-05 05:38:48.494130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:38:48.494154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:38:48.494191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:38:48.494211 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.494229 | orchestrator | 2026-02-05 05:38:48.494247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:38:48.494266 | orchestrator | Thursday 05 February 2026 05:38:09 +0000 (0:00:01.412) 0:57:54.259 ***** 2026-02-05 05:38:48.494286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:38:48.494304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:38:48.494324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:38:48.494342 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.494360 | orchestrator | 2026-02-05 05:38:48.494378 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:38:48.494390 | orchestrator | Thursday 05 February 2026 05:38:11 +0000 (0:00:01.708) 0:57:55.968 ***** 2026-02-05 05:38:48.494401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:38:48.494412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:38:48.494439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:38:48.494450 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.494461 | orchestrator | 2026-02-05 05:38:48.494471 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-05 05:38:48.494482 | orchestrator | Thursday 05 February 2026 05:38:12 +0000 (0:00:01.690) 0:57:57.659 ***** 2026-02-05 05:38:48.494506 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.494517 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.494528 | orchestrator | 2026-02-05 05:38:48.494539 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:38:48.494549 | orchestrator | Thursday 05 February 2026 05:38:14 +0000 (0:00:01.601) 0:57:59.260 ***** 2026-02-05 05:38:48.494566 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 05:38:48.494584 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 05:38:48.494595 | orchestrator | 2026-02-05 05:38:48.494606 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:38:48.494617 | orchestrator | Thursday 05 February 2026 05:38:15 +0000 (0:00:01.413) 0:58:00.673 ***** 2026-02-05 05:38:48.494628 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.494641 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.494660 | orchestrator | 2026-02-05 05:38:48.494715 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-05 05:38:48.494738 | orchestrator | Thursday 05 February 2026 05:38:17 +0000 (0:00:01.869) 0:58:02.543 ***** 2026-02-05 05:38:48.494754 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.494772 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.494788 | orchestrator | 2026-02-05 05:38:48.494807 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-05 05:38:48.494825 | orchestrator | Thursday 05 February 2026 05:38:18 +0000 (0:00:01.247) 0:58:03.790 ***** 2026-02-05 05:38:48.494844 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, 
testbed-node-4 2026-02-05 05:38:48.494864 | orchestrator | 2026-02-05 05:38:48.494883 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-05 05:38:48.494901 | orchestrator | Thursday 05 February 2026 05:38:20 +0000 (0:00:01.358) 0:58:05.149 ***** 2026-02-05 05:38:48.494920 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 05:38:48.494948 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 05:38:48.494968 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-05 05:38:48.494985 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-05 05:38:48.495003 | orchestrator | 2026-02-05 05:38:48.495020 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-05 05:38:48.495038 | orchestrator | Thursday 05 February 2026 05:38:22 +0000 (0:00:01.954) 0:58:07.103 ***** 2026-02-05 05:38:48.495056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:38:48.495075 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 05:38:48.495165 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:38:48.495178 | orchestrator | 2026-02-05 05:38:48.495190 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:38:48.495201 | orchestrator | Thursday 05 February 2026 05:38:25 +0000 (0:00:03.285) 0:58:10.388 ***** 2026-02-05 05:38:48.495211 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-05 05:38:48.495222 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 05:38:48.495233 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.495244 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-05 05:38:48.495255 | orchestrator | skipping: [testbed-node-4] => 
(item=None)  2026-02-05 05:38:48.495266 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.495277 | orchestrator | 2026-02-05 05:38:48.495288 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-05 05:38:48.495299 | orchestrator | Thursday 05 February 2026 05:38:27 +0000 (0:00:02.096) 0:58:12.485 ***** 2026-02-05 05:38:48.495310 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.495321 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.495332 | orchestrator | 2026-02-05 05:38:48.495343 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-05 05:38:48.495367 | orchestrator | Thursday 05 February 2026 05:38:29 +0000 (0:00:01.593) 0:58:14.078 ***** 2026-02-05 05:38:48.495378 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.495389 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:38:48.495401 | orchestrator | 2026-02-05 05:38:48.495412 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-05 05:38:48.495424 | orchestrator | Thursday 05 February 2026 05:38:30 +0000 (0:00:01.245) 0:58:15.324 ***** 2026-02-05 05:38:48.495435 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4 2026-02-05 05:38:48.495447 | orchestrator | 2026-02-05 05:38:48.495468 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-05 05:38:48.495479 | orchestrator | Thursday 05 February 2026 05:38:31 +0000 (0:00:01.346) 0:58:16.671 ***** 2026-02-05 05:38:48.495490 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4 2026-02-05 05:38:48.495501 | orchestrator | 2026-02-05 05:38:48.495513 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-05 05:38:48.495524 | orchestrator | Thursday 05 February 
2026 05:38:33 +0000 (0:00:01.217) 0:58:17.889 ***** 2026-02-05 05:38:48.495535 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.495547 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.495558 | orchestrator | 2026-02-05 05:38:48.495569 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-05 05:38:48.495581 | orchestrator | Thursday 05 February 2026 05:38:35 +0000 (0:00:02.149) 0:58:20.038 ***** 2026-02-05 05:38:48.495592 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.495603 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.495614 | orchestrator | 2026-02-05 05:38:48.495626 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-05 05:38:48.495637 | orchestrator | Thursday 05 February 2026 05:38:37 +0000 (0:00:02.039) 0:58:22.078 ***** 2026-02-05 05:38:48.495648 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.495659 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.495671 | orchestrator | 2026-02-05 05:38:48.495682 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-05 05:38:48.495694 | orchestrator | Thursday 05 February 2026 05:38:39 +0000 (0:00:02.333) 0:58:24.412 ***** 2026-02-05 05:38:48.495705 | orchestrator | changed: [testbed-node-3] 2026-02-05 05:38:48.495717 | orchestrator | changed: [testbed-node-4] 2026-02-05 05:38:48.495728 | orchestrator | 2026-02-05 05:38:48.495739 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-05 05:38:48.495751 | orchestrator | Thursday 05 February 2026 05:38:43 +0000 (0:00:03.598) 0:58:28.010 ***** 2026-02-05 05:38:48.495762 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:38:48.495773 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:38:48.495785 | orchestrator | 2026-02-05 05:38:48.495796 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-02-05 05:38:48.495808 | orchestrator | Thursday 05 February 2026 05:38:44 +0000 (0:00:01.764) 0:58:29.775 ***** 2026-02-05 05:38:48.495819 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:38:48.495844 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:39:11.051180 | orchestrator | 2026-02-05 05:39:11.051259 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-05 05:39:11.051266 | orchestrator | 2026-02-05 05:39:11.051270 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 05:39:11.051275 | orchestrator | Thursday 05 February 2026 05:38:48 +0000 (0:00:03.524) 0:58:33.299 ***** 2026-02-05 05:39:11.051279 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-05 05:39:11.051283 | orchestrator | 2026-02-05 05:39:11.051287 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 05:39:11.051291 | orchestrator | Thursday 05 February 2026 05:38:49 +0000 (0:00:01.113) 0:58:34.413 ***** 2026-02-05 05:39:11.051295 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051327 | orchestrator | 2026-02-05 05:39:11.051332 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:39:11.051336 | orchestrator | Thursday 05 February 2026 05:38:51 +0000 (0:00:01.514) 0:58:35.927 ***** 2026-02-05 05:39:11.051340 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051343 | orchestrator | 2026-02-05 05:39:11.051347 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:39:11.051351 | orchestrator | Thursday 05 February 2026 05:38:52 +0000 (0:00:01.101) 0:58:37.028 ***** 2026-02-05 05:39:11.051355 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051359 | 
orchestrator | 2026-02-05 05:39:11.051363 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:39:11.051367 | orchestrator | Thursday 05 February 2026 05:38:53 +0000 (0:00:01.512) 0:58:38.541 ***** 2026-02-05 05:39:11.051370 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051374 | orchestrator | 2026-02-05 05:39:11.051378 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:39:11.051382 | orchestrator | Thursday 05 February 2026 05:38:54 +0000 (0:00:01.164) 0:58:39.706 ***** 2026-02-05 05:39:11.051385 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051389 | orchestrator | 2026-02-05 05:39:11.051393 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:39:11.051397 | orchestrator | Thursday 05 February 2026 05:38:56 +0000 (0:00:01.134) 0:58:40.841 ***** 2026-02-05 05:39:11.051400 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051404 | orchestrator | 2026-02-05 05:39:11.051408 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:39:11.051413 | orchestrator | Thursday 05 February 2026 05:38:57 +0000 (0:00:01.156) 0:58:41.998 ***** 2026-02-05 05:39:11.051417 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:11.051421 | orchestrator | 2026-02-05 05:39:11.051425 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:39:11.051428 | orchestrator | Thursday 05 February 2026 05:38:58 +0000 (0:00:01.135) 0:58:43.133 ***** 2026-02-05 05:39:11.051432 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051436 | orchestrator | 2026-02-05 05:39:11.051440 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:39:11.051444 | orchestrator | Thursday 05 February 2026 05:38:59 +0000 
(0:00:01.120) 0:58:44.254 ***** 2026-02-05 05:39:11.051448 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:39:11.051452 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:39:11.051455 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:39:11.051459 | orchestrator | 2026-02-05 05:39:11.051463 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 05:39:11.051477 | orchestrator | Thursday 05 February 2026 05:39:01 +0000 (0:00:01.703) 0:58:45.957 ***** 2026-02-05 05:39:11.051481 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:11.051485 | orchestrator | 2026-02-05 05:39:11.051489 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:39:11.051493 | orchestrator | Thursday 05 February 2026 05:39:02 +0000 (0:00:01.244) 0:58:47.201 ***** 2026-02-05 05:39:11.051496 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:39:11.051500 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:39:11.051504 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:39:11.051507 | orchestrator | 2026-02-05 05:39:11.051511 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:39:11.051515 | orchestrator | Thursday 05 February 2026 05:39:05 +0000 (0:00:02.904) 0:58:50.106 ***** 2026-02-05 05:39:11.051519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 05:39:11.051527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 05:39:11.051530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 
05:39:11.051534 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:11.051538 | orchestrator | 2026-02-05 05:39:11.051542 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:39:11.051545 | orchestrator | Thursday 05 February 2026 05:39:06 +0000 (0:00:01.431) 0:58:51.537 ***** 2026-02-05 05:39:11.051551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:39:11.051556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:39:11.051570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:39:11.051574 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:11.051578 | orchestrator | 2026-02-05 05:39:11.051582 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:39:11.051586 | orchestrator | Thursday 05 February 2026 05:39:08 +0000 (0:00:01.972) 0:58:53.510 ***** 2026-02-05 05:39:11.051591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 
05:39:11.051598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:39:11.051602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:39:11.051606 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:11.051610 | orchestrator | 2026-02-05 05:39:11.051614 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:39:11.051617 | orchestrator | Thursday 05 February 2026 05:39:09 +0000 (0:00:01.160) 0:58:54.670 ***** 2026-02-05 05:39:11.051625 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:39:02.933145', 'end': '2026-02-05 05:39:02.986590', 'delta': '0:00:00.053445', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:39:11.051636 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:39:03.514640', 'end': '2026-02-05 05:39:03.560939', 'delta': '0:00:00.046299', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:39:11.051640 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:39:04.106125', 'end': '2026-02-05 05:39:04.162423', 'delta': '0:00:00.056298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:39:11.051644 | orchestrator | 2026-02-05 05:39:11.051650 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:39:28.802713 | orchestrator | Thursday 05 February 2026 05:39:11 +0000 (0:00:01.188) 0:58:55.858 ***** 2026-02-05 05:39:28.802832 | orchestrator | ok: [testbed-node-3] 2026-02-05 
05:39:28.802878 | orchestrator | 2026-02-05 05:39:28.802893 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:39:28.802905 | orchestrator | Thursday 05 February 2026 05:39:12 +0000 (0:00:01.269) 0:58:57.128 ***** 2026-02-05 05:39:28.802916 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.802929 | orchestrator | 2026-02-05 05:39:28.802941 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-05 05:39:28.802952 | orchestrator | Thursday 05 February 2026 05:39:13 +0000 (0:00:01.649) 0:58:58.777 ***** 2026-02-05 05:39:28.802963 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:28.802974 | orchestrator | 2026-02-05 05:39:28.802996 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:39:28.803007 | orchestrator | Thursday 05 February 2026 05:39:15 +0000 (0:00:01.125) 0:58:59.903 ***** 2026-02-05 05:39:28.803029 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:39:28.803040 | orchestrator | 2026-02-05 05:39:28.803080 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:39:28.803091 | orchestrator | Thursday 05 February 2026 05:39:17 +0000 (0:00:02.054) 0:59:01.958 ***** 2026-02-05 05:39:28.803102 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:28.803113 | orchestrator | 2026-02-05 05:39:28.803124 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:39:28.803135 | orchestrator | Thursday 05 February 2026 05:39:18 +0000 (0:00:01.162) 0:59:03.120 ***** 2026-02-05 05:39:28.803146 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.803157 | orchestrator | 2026-02-05 05:39:28.803168 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:39:28.803179 | orchestrator 
| Thursday 05 February 2026 05:39:19 +0000 (0:00:01.143) 0:59:04.264 ***** 2026-02-05 05:39:28.803190 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.803201 | orchestrator | 2026-02-05 05:39:28.803212 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:39:28.803223 | orchestrator | Thursday 05 February 2026 05:39:20 +0000 (0:00:01.231) 0:59:05.496 ***** 2026-02-05 05:39:28.803234 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.803352 | orchestrator | 2026-02-05 05:39:28.803367 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:39:28.803378 | orchestrator | Thursday 05 February 2026 05:39:21 +0000 (0:00:01.127) 0:59:06.623 ***** 2026-02-05 05:39:28.803389 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.803400 | orchestrator | 2026-02-05 05:39:28.803411 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:39:28.803422 | orchestrator | Thursday 05 February 2026 05:39:22 +0000 (0:00:01.108) 0:59:07.731 ***** 2026-02-05 05:39:28.803433 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:28.803444 | orchestrator | 2026-02-05 05:39:28.803456 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:39:28.803467 | orchestrator | Thursday 05 February 2026 05:39:24 +0000 (0:00:01.181) 0:59:08.913 ***** 2026-02-05 05:39:28.803478 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.803489 | orchestrator | 2026-02-05 05:39:28.803500 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:39:28.803511 | orchestrator | Thursday 05 February 2026 05:39:25 +0000 (0:00:01.101) 0:59:10.015 ***** 2026-02-05 05:39:28.803521 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:28.803532 | orchestrator | 2026-02-05 05:39:28.803559 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:39:28.803570 | orchestrator | Thursday 05 February 2026 05:39:26 +0000 (0:00:01.149) 0:59:11.165 ***** 2026-02-05 05:39:28.803581 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:39:28.803592 | orchestrator | 2026-02-05 05:39:28.803603 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:39:28.803615 | orchestrator | Thursday 05 February 2026 05:39:27 +0000 (0:00:01.086) 0:59:12.251 ***** 2026-02-05 05:39:28.803626 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:39:28.803636 | orchestrator | 2026-02-05 05:39:28.803647 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:39:28.803658 | orchestrator | Thursday 05 February 2026 05:39:28 +0000 (0:00:01.149) 0:59:13.401 ***** 2026-02-05 05:39:28.803671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:39:28.803686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567', 'dm-uuid-LVM-rm93nYJXJvDmNv1mI2i0aCOQRWUNQlkCoPPr3WLpbHMBKwrxigfqk31Pio1T8A2M'], 'uuids': ['7cbe1ae0-472e-4015-9248-1616ea071c47'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['oPPr3W-LpbH-MBKw-rxig-fqk3-1Pio-1T8A2M']}})  2026-02-05 05:39:28.803721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff', 'scsi-SQEMU_QEMU_HARDDISK_41a73991-c162-41f3-bbc6-bb80a44790ff'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '41a73991', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:39:28.803735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VPbbSc-FYsx-oCa5-EK96-LSd2-FMne-gw3pzp', 'scsi-0QEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2', 'scsi-SQEMU_QEMU_HARDDISK_67112651-7f80-4cd8-91f1-cb61626610a2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e']}})  2026-02-05 05:39:28.803758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:39:28.803770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:39:28.803787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:39:28.803800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:39:28.803811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV', 'dm-uuid-CRYPT-LUKS2-24caf7b252c344f2a02a18860df8d987-VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:39:28.803830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:39:30.142473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--de37fca4--ea41--596c--ab1a--50038d0e278e-osd--block--de37fca4--ea41--596c--ab1a--50038d0e278e', 'dm-uuid-LVM-gjVz64L0xYhHucIQrbSWO4IaXeskE9njVHEBOKPFjChmvGixI0fMAnchfE228jrV'], 'uuids': ['24caf7b2-52c3-44f2-a02a-18860df8d987'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '67112651', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VHEBOK-PFjC-hmvG-ixI0-fMAn-chfE-228jrV']}})  2026-02-05 05:39:30.142604 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-30TRfy-AcTU-PjNY-ZSvI-Ms8S-pTLw-T1Q2CW', 'scsi-0QEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b', 'scsi-SQEMU_QEMU_HARDDISK_fbfcf598-94c5-41e4-b7a9-e869a71c977b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fbfcf598', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--825a1c54--3e62--51fa--b7a4--9af3e8833567-osd--block--825a1c54--3e62--51fa--b7a4--9af3e8833567']}})  2026-02-05 05:39:30.142620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:39:30.142646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5fa98ac', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5fa98ac-44dd-4c0e-a983-67c120325b97-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
[… tail of the previous ceph-facts device loop on testbed-node-3: items loop5, loop3 and dm-3 also skipped …]
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
Thursday 05 February 2026 05:39:29 +0000 (0:00:01.348) 0:59:14.749 *****
skipping: [testbed-node-3] => (items loop0-loop7, sr0, sda, sdb, sdc, sdd, dm-0, dm-1, dm-2, dm-3)
skipping: [testbed-node-3]
[… all 17 block devices skipped; false_condition: 'osd_auto_discovery | default(False) | bool'. Full per-device fact dictionaries elided: loop0-loop7 are empty virtual loop devices; sda is the 80.00 GB QEMU root disk (sda1 'cloudimg-rootfs' 79.00 GB, sda14 4.00 MB, sda15 'UEFI' 106.00 MB, sda16 'BOOT' 913.00 MB); sdb and sdc are 20.00 GB QEMU disks serving as LVM PVs for the LUKS-encrypted Ceph OSD volumes dm-0-dm-3 (~20 GB each); sdd is an unused 20.00 GB QEMU disk; sr0 is the 506.00 KB QEMU DVD-ROM labelled 'config-2' …]

TASK [ceph-facts : Check if the ceph conf exists] ******************************
Thursday 05 February 2026 05:39:31 +0000 (0:00:01.366) 0:59:16.116 *****
ok: [testbed-node-3]

TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
Thursday 05 February 2026 05:39:32 +0000 (0:00:01.500) 0:59:17.616 *****
ok: [testbed-node-3]

TASK [ceph-facts : Read osd pool default crush rule] ***************************
Thursday 05 February 2026 05:39:33 +0000 (0:00:01.157) 0:59:18.774 *****
ok: [testbed-node-3]

TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
Thursday 05 February 2026 05:39:35 +0000 (0:00:01.491) 0:59:20.265 *****
skipping: [testbed-node-3]

TASK [ceph-facts : Read osd pool default crush rule] ***************************
Thursday 05 February 2026 05:39:36 +0000 (0:00:01.141) 0:59:21.406 *****
skipping: [testbed-node-3]

TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
Thursday 05 February 2026 05:39:37 +0000 (0:00:01.237) 0:59:22.644 *****
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
Thursday 05 February 2026 05:39:39 +0000 (0:00:01.214) 0:59:23.859 *****
ok: [testbed-node-3] => (item=testbed-node-0)
ok: [testbed-node-3] => (item=testbed-node-1)
ok: [testbed-node-3] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
Thursday 05 February 2026 05:39:40 +0000 (0:00:01.911) 0:59:25.770 *****
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
Thursday 05 February 2026 05:39:42 +0000 (0:00:01.160) 0:59:26.930 *****
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 05 February 2026 05:39:43 +0000 (0:00:01.117) 0:59:28.048 *****
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 05 February 2026 05:39:44 +0000 (0:00:01.121) 0:59:29.169 *****
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 05 February 2026 05:39:45 +0000 (0:00:01.120) 0:59:30.290 *****
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Thursday 05 February 2026 05:39:46 +0000 (0:00:01.108) 0:59:31.399 *****
ok: [testbed-node-3]

TASK [ceph-facts : Set_fact _interface] ****************************************
Thursday 05 February 2026 05:39:47 +0000 (0:00:01.198) 0:59:32.597 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 05 February 2026 05:39:49 +0000 (0:00:01.449) 0:59:34.047 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 05 February 2026 05:39:50 +0000 (0:00:01.399) 0:59:35.447 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Thursday 05 February 2026 05:39:52 +0000 (0:00:01.391) 0:59:36.839 *****
ok: [testbed-node-3]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Thursday 05 February 2026 05:39:53 +0000 (0:00:01.243) 0:59:38.083 *****
ok: [testbed-node-3] => (item=0)

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Thursday 05 February 2026 05:39:54 +0000 (0:00:01.319) 0:59:39.403 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Thursday 05 February 2026 05:39:56 +0000 (0:00:02.144) 0:59:41.547 *****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
Thursday 05 February 2026 05:39:59 +0000 (0:00:02.546) 0:59:44.093 *****
changed: [testbed-node-3]

TASK [Stop ceph rgw (pt. 1)] ***************************************************
Thursday 05 February 2026 05:40:01 +0000 (0:00:02.316) 0:59:46.410 *****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})

TASK [Stop ceph rgw (pt. 2)] ***************************************************
Thursday 05 February 2026 05:40:04 +0000 (0:00:03.090) 0:59:49.500 *****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 05 February 2026 05:40:07 +0000 (0:00:02.393) 0:59:51.893 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 05 February 2026 05:40:08 +0000 (0:00:01.142) 0:59:53.036 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 05 February 2026 05:40:09 +0000 (0:00:01.147) 0:59:54.183 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 05 February 2026 05:40:10 +0000 (0:00:01.137) 0:59:55.320 *****
ok: [testbed-node-3]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 05 February 2026 05:40:12 +0000 (0:00:01.510) 0:59:56.831 *****
ok: [testbed-node-3]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 05 February 2026 05:40:13 +0000 (0:00:01.527) 0:59:58.358 *****
ok: [testbed-node-3]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 05 February 2026 05:40:15 +0000 (0:00:01.497) 0:59:59.856 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 05 February 2026 05:40:16 +0000 (0:00:01.121) 1:00:00.977 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 05 February 2026 05:40:17 +0000 (0:00:01.110) 1:00:02.088 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 05 February 2026 05:40:18 +0000 (0:00:01.113) 1:00:03.201 *****
ok: [testbed-node-3]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 05 February 2026 05:40:19 +0000 (0:00:01.574) 1:00:04.775 *****
ok: [testbed-node-3]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 05 February 2026 05:40:21 +0000 (0:00:01.586) 1:00:06.362 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 05 February 2026 05:40:22 +0000 (0:00:01.144) 1:00:07.507 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 05 February 2026 05:40:23 +0000 (0:00:01.176) 1:00:08.684 *****
ok: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 05 February 2026 05:40:24 +0000 (0:00:01.137) 1:00:09.821 *****
ok: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 05 February 2026 05:40:26 +0000 (0:00:01.148) 1:00:10.970 *****
ok: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 05 February 2026 05:40:27 +0000 (0:00:01.121) 1:00:12.092 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 05 February 2026 05:40:28 +0000 (0:00:01.146) 1:00:13.239 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 05 February 2026 05:40:29 +0000 (0:00:01.112) 1:00:14.351 *****
skipping: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 05 February 2026 05:40:30 +0000 (0:00:01.103) 1:00:15.455 *****
ok: [testbed-node-3]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 05 February 2026 05:40:31 +0000 (0:00:01.130) 1:00:16.586 *****
ok: [testbed-node-3]

TASK [ceph-common : Include configure_repository.yml] **************************
Thursday 05 February 2026 05:40:32 +0000 (0:00:01.107) 1:00:17.694 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
Thursday 05 February 2026 05:40:33 +0000 (0:00:01.092) 1:00:18.786 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
Thursday 05 February 2026 05:40:35 +0000 (0:00:01.125) 1:00:19.912 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include installs/install_on_debian.yml] ********************
Thursday 05 February 2026 05:40:36 +0000 (0:00:01.179) 1:00:21.092 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
Thursday 05 February 2026 05:40:37 +0000 (0:00:01.131) 1:00:22.223 *****
skipping: [testbed-node-3]

TASK [ceph-common : Get ceph version] ******************************************
Thursday 05 February 2026 05:40:38 +0000 (0:00:01.130) 1:00:23.353 *****
skipping: [testbed-node-3]

TASK [ceph-common : Set_fact ceph_version] *************************************
Thursday 05 February 2026 05:40:39 +0000 (0:00:01.097) 1:00:24.451 *****
skipping: [testbed-node-3]

TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
Thursday 05 February 2026 05:40:40 +0000 (0:00:01.165) 1:00:25.616 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
Thursday 05 February 2026 05:40:41 +0000 (0:00:01.110) 1:00:26.727 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include configure_cluster_name.yml] ************************
Thursday 05 February 2026 05:40:43 +0000 (0:00:01.106) 1:00:27.834 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include configure_memory_allocator.yml] ********************
Thursday 05 February 2026 05:40:44 +0000 (0:00:01.133) 1:00:28.968 *****
skipping: [testbed-node-3]

TASK [ceph-common : Include selinux.yml]
*************************************** 2026-02-05 05:40:51.877374 | orchestrator | Thursday 05 February 2026 05:40:45 +0000 (0:00:01.106) 1:00:30.074 ***** 2026-02-05 05:40:51.877385 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:40:51.877396 | orchestrator | 2026-02-05 05:40:51.877407 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:40:51.877417 | orchestrator | Thursday 05 February 2026 05:40:46 +0000 (0:00:01.106) 1:00:31.181 ***** 2026-02-05 05:40:51.877428 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:40:51.877439 | orchestrator | 2026-02-05 05:40:51.877449 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:40:51.877460 | orchestrator | Thursday 05 February 2026 05:40:48 +0000 (0:00:01.968) 1:00:33.149 ***** 2026-02-05 05:40:51.877471 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:40:51.877481 | orchestrator | 2026-02-05 05:40:51.877492 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:40:51.877502 | orchestrator | Thursday 05 February 2026 05:40:50 +0000 (0:00:02.321) 1:00:35.471 ***** 2026-02-05 05:40:51.877513 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-05 05:40:51.877523 | orchestrator | 2026-02-05 05:40:51.877534 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:40:51.877552 | orchestrator | Thursday 05 February 2026 05:40:51 +0000 (0:00:01.212) 1:00:36.683 ***** 2026-02-05 05:41:38.441335 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441423 | orchestrator | 2026-02-05 05:41:38.441432 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:41:38.441439 | orchestrator | Thursday 05 February 2026 05:40:52 +0000 (0:00:01.116) 1:00:37.800 ***** 
2026-02-05 05:41:38.441445 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441450 | orchestrator | 2026-02-05 05:41:38.441456 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:41:38.441461 | orchestrator | Thursday 05 February 2026 05:40:54 +0000 (0:00:01.142) 1:00:38.943 ***** 2026-02-05 05:41:38.441467 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:41:38.441472 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:41:38.441478 | orchestrator | 2026-02-05 05:41:38.441483 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:41:38.441501 | orchestrator | Thursday 05 February 2026 05:40:56 +0000 (0:00:01.884) 1:00:40.827 ***** 2026-02-05 05:41:38.441507 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:41:38.441514 | orchestrator | 2026-02-05 05:41:38.441519 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:41:38.441524 | orchestrator | Thursday 05 February 2026 05:40:57 +0000 (0:00:01.514) 1:00:42.342 ***** 2026-02-05 05:41:38.441529 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441534 | orchestrator | 2026-02-05 05:41:38.441539 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:41:38.441544 | orchestrator | Thursday 05 February 2026 05:40:58 +0000 (0:00:01.133) 1:00:43.475 ***** 2026-02-05 05:41:38.441549 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441554 | orchestrator | 2026-02-05 05:41:38.441559 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:41:38.441565 | orchestrator | Thursday 05 February 2026 05:40:59 +0000 (0:00:01.138) 1:00:44.614 ***** 2026-02-05 05:41:38.441570 | orchestrator | 
skipping: [testbed-node-3] 2026-02-05 05:41:38.441575 | orchestrator | 2026-02-05 05:41:38.441580 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:41:38.441585 | orchestrator | Thursday 05 February 2026 05:41:00 +0000 (0:00:01.167) 1:00:45.781 ***** 2026-02-05 05:41:38.441590 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-05 05:41:38.441612 | orchestrator | 2026-02-05 05:41:38.441618 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:41:38.441623 | orchestrator | Thursday 05 February 2026 05:41:02 +0000 (0:00:01.114) 1:00:46.896 ***** 2026-02-05 05:41:38.441628 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:41:38.441633 | orchestrator | 2026-02-05 05:41:38.441638 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:41:38.441643 | orchestrator | Thursday 05 February 2026 05:41:03 +0000 (0:00:01.743) 1:00:48.639 ***** 2026-02-05 05:41:38.441649 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:41:38.441653 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:41:38.441659 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:41:38.441664 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441669 | orchestrator | 2026-02-05 05:41:38.441674 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:41:38.441679 | orchestrator | Thursday 05 February 2026 05:41:04 +0000 (0:00:01.160) 1:00:49.800 ***** 2026-02-05 05:41:38.441684 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441689 | orchestrator | 2026-02-05 05:41:38.441694 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-05 05:41:38.441699 | orchestrator | Thursday 05 February 2026 05:41:06 +0000 (0:00:01.118) 1:00:50.919 ***** 2026-02-05 05:41:38.441704 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441710 | orchestrator | 2026-02-05 05:41:38.441714 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:41:38.441720 | orchestrator | Thursday 05 February 2026 05:41:07 +0000 (0:00:01.173) 1:00:52.092 ***** 2026-02-05 05:41:38.441725 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441730 | orchestrator | 2026-02-05 05:41:38.441735 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:41:38.441740 | orchestrator | Thursday 05 February 2026 05:41:08 +0000 (0:00:01.192) 1:00:53.285 ***** 2026-02-05 05:41:38.441747 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441755 | orchestrator | 2026-02-05 05:41:38.441763 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:41:38.441772 | orchestrator | Thursday 05 February 2026 05:41:09 +0000 (0:00:01.125) 1:00:54.410 ***** 2026-02-05 05:41:38.441780 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441787 | orchestrator | 2026-02-05 05:41:38.441795 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:41:38.441803 | orchestrator | Thursday 05 February 2026 05:41:10 +0000 (0:00:01.165) 1:00:55.576 ***** 2026-02-05 05:41:38.441811 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:41:38.441818 | orchestrator | 2026-02-05 05:41:38.441825 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:41:38.441833 | orchestrator | Thursday 05 February 2026 05:41:13 +0000 (0:00:02.548) 1:00:58.124 ***** 2026-02-05 05:41:38.441841 | orchestrator | ok: 
[testbed-node-3] 2026-02-05 05:41:38.441850 | orchestrator | 2026-02-05 05:41:38.441858 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:41:38.441866 | orchestrator | Thursday 05 February 2026 05:41:14 +0000 (0:00:01.130) 1:00:59.255 ***** 2026-02-05 05:41:38.441875 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-05 05:41:38.441882 | orchestrator | 2026-02-05 05:41:38.441891 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:41:38.441917 | orchestrator | Thursday 05 February 2026 05:41:15 +0000 (0:00:01.105) 1:01:00.360 ***** 2026-02-05 05:41:38.441925 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441931 | orchestrator | 2026-02-05 05:41:38.441957 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:41:38.441964 | orchestrator | Thursday 05 February 2026 05:41:16 +0000 (0:00:01.119) 1:01:01.479 ***** 2026-02-05 05:41:38.441977 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.441983 | orchestrator | 2026-02-05 05:41:38.441989 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:41:38.441996 | orchestrator | Thursday 05 February 2026 05:41:17 +0000 (0:00:01.145) 1:01:02.625 ***** 2026-02-05 05:41:38.442001 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.442007 | orchestrator | 2026-02-05 05:41:38.442052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:41:38.442058 | orchestrator | Thursday 05 February 2026 05:41:18 +0000 (0:00:01.157) 1:01:03.783 ***** 2026-02-05 05:41:38.442063 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.442068 | orchestrator | 2026-02-05 05:41:38.442079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-05 05:41:38.442100 | orchestrator | Thursday 05 February 2026 05:41:20 +0000 (0:00:01.124) 1:01:04.908 ***** 2026-02-05 05:41:38.442117 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.442126 | orchestrator | 2026-02-05 05:41:38.442134 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:41:38.442144 | orchestrator | Thursday 05 February 2026 05:41:21 +0000 (0:00:01.200) 1:01:06.108 ***** 2026-02-05 05:41:38.442149 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.442154 | orchestrator | 2026-02-05 05:41:38.442159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:41:38.442165 | orchestrator | Thursday 05 February 2026 05:41:22 +0000 (0:00:01.115) 1:01:07.223 ***** 2026-02-05 05:41:38.442170 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.442175 | orchestrator | 2026-02-05 05:41:38.442180 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:41:38.442185 | orchestrator | Thursday 05 February 2026 05:41:23 +0000 (0:00:01.149) 1:01:08.373 ***** 2026-02-05 05:41:38.442190 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:41:38.442196 | orchestrator | 2026-02-05 05:41:38.442201 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:41:38.442206 | orchestrator | Thursday 05 February 2026 05:41:24 +0000 (0:00:01.143) 1:01:09.516 ***** 2026-02-05 05:41:38.442211 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:41:38.442216 | orchestrator | 2026-02-05 05:41:38.442221 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:41:38.442227 | orchestrator | Thursday 05 February 2026 05:41:25 +0000 (0:00:01.112) 1:01:10.629 ***** 2026-02-05 05:41:38.442232 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-05 05:41:38.442237 | orchestrator | 2026-02-05 05:41:38.442242 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:41:38.442248 | orchestrator | Thursday 05 February 2026 05:41:26 +0000 (0:00:01.099) 1:01:11.729 ***** 2026-02-05 05:41:38.442253 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-05 05:41:38.442258 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-05 05:41:38.442264 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-05 05:41:38.442269 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-05 05:41:38.442274 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-05 05:41:38.442279 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-05 05:41:38.442284 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-05 05:41:38.442290 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:41:38.442295 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:41:38.442300 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:41:38.442305 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:41:38.442311 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:41:38.442316 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:41:38.442327 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:41:38.442332 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-05 05:41:38.442337 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-05 05:41:38.442343 | orchestrator | 2026-02-05 05:41:38.442348 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:41:38.442353 | orchestrator | Thursday 05 February 2026 05:41:33 +0000 (0:00:06.876) 1:01:18.605 ***** 2026-02-05 05:41:38.442358 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-05 05:41:38.442363 | orchestrator | 2026-02-05 05:41:38.442368 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 05:41:38.442374 | orchestrator | Thursday 05 February 2026 05:41:34 +0000 (0:00:01.107) 1:01:19.712 ***** 2026-02-05 05:41:38.442379 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:41:38.442385 | orchestrator | 2026-02-05 05:41:38.442391 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:41:38.442396 | orchestrator | Thursday 05 February 2026 05:41:36 +0000 (0:00:01.517) 1:01:21.230 ***** 2026-02-05 05:41:38.442401 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:41:38.442406 | orchestrator | 2026-02-05 05:41:38.442411 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:41:38.442422 | orchestrator | Thursday 05 February 2026 05:41:38 +0000 (0:00:02.020) 1:01:23.251 ***** 2026-02-05 05:42:28.262994 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263116 | orchestrator | 2026-02-05 05:42:28.263130 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:42:28.263139 | orchestrator | Thursday 05 February 2026 05:41:39 +0000 (0:00:01.151) 1:01:24.402 ***** 2026-02-05 05:42:28.263151 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263163 | 
orchestrator | 2026-02-05 05:42:28.263176 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:42:28.263188 | orchestrator | Thursday 05 February 2026 05:41:40 +0000 (0:00:01.106) 1:01:25.508 ***** 2026-02-05 05:42:28.263215 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263237 | orchestrator | 2026-02-05 05:42:28.263246 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 05:42:28.263254 | orchestrator | Thursday 05 February 2026 05:41:41 +0000 (0:00:01.095) 1:01:26.603 ***** 2026-02-05 05:42:28.263261 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263269 | orchestrator | 2026-02-05 05:42:28.263292 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:42:28.263300 | orchestrator | Thursday 05 February 2026 05:41:42 +0000 (0:00:01.107) 1:01:27.711 ***** 2026-02-05 05:42:28.263307 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263315 | orchestrator | 2026-02-05 05:42:28.263322 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:42:28.263331 | orchestrator | Thursday 05 February 2026 05:41:44 +0000 (0:00:01.112) 1:01:28.823 ***** 2026-02-05 05:42:28.263338 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263345 | orchestrator | 2026-02-05 05:42:28.263353 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:42:28.263360 | orchestrator | Thursday 05 February 2026 05:41:45 +0000 (0:00:01.099) 1:01:29.923 ***** 2026-02-05 05:42:28.263367 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263375 | orchestrator | 2026-02-05 05:42:28.263382 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-05 05:42:28.263389 | orchestrator | Thursday 05 February 2026 05:41:46 +0000 (0:00:01.128) 1:01:31.052 ***** 2026-02-05 05:42:28.263397 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263422 | orchestrator | 2026-02-05 05:42:28.263430 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:42:28.263437 | orchestrator | Thursday 05 February 2026 05:41:47 +0000 (0:00:01.117) 1:01:32.169 ***** 2026-02-05 05:42:28.263445 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263452 | orchestrator | 2026-02-05 05:42:28.263460 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:42:28.263469 | orchestrator | Thursday 05 February 2026 05:41:48 +0000 (0:00:01.133) 1:01:33.303 ***** 2026-02-05 05:42:28.263478 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263486 | orchestrator | 2026-02-05 05:42:28.263495 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:42:28.263503 | orchestrator | Thursday 05 February 2026 05:41:49 +0000 (0:00:01.129) 1:01:34.432 ***** 2026-02-05 05:42:28.263511 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263520 | orchestrator | 2026-02-05 05:42:28.263528 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:42:28.263536 | orchestrator | Thursday 05 February 2026 05:41:50 +0000 (0:00:01.145) 1:01:35.578 ***** 2026-02-05 05:42:28.263545 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:42:28.263553 | orchestrator | 2026-02-05 05:42:28.263561 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:42:28.263570 | orchestrator | Thursday 05 February 2026 05:41:55 +0000 (0:00:04.563) 1:01:40.142 ***** 2026-02-05 05:42:28.263578 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:42:28.263587 | orchestrator | 2026-02-05 05:42:28.263596 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:42:28.263604 | orchestrator | Thursday 05 February 2026 05:41:56 +0000 (0:00:01.153) 1:01:41.296 ***** 2026-02-05 05:42:28.263614 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-05 05:42:28.263626 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-05 05:42:28.263635 | orchestrator | 2026-02-05 05:42:28.263644 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:42:28.263653 | orchestrator | Thursday 05 February 2026 05:42:01 +0000 (0:00:05.145) 1:01:46.442 ***** 2026-02-05 05:42:28.263661 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263669 | orchestrator | 2026-02-05 05:42:28.263677 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:42:28.263686 | orchestrator | Thursday 05 February 2026 05:42:02 +0000 (0:00:01.186) 1:01:47.629 ***** 2026-02-05 05:42:28.263694 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263703 | orchestrator | 2026-02-05 05:42:28.263712 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:42:28.263736 | orchestrator | Thursday 05 February 2026 05:42:03 +0000 (0:00:01.152) 1:01:48.781 ***** 2026-02-05 05:42:28.263744 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263752 | orchestrator | 2026-02-05 05:42:28.263759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:42:28.263767 | orchestrator | Thursday 05 February 2026 05:42:05 +0000 (0:00:01.131) 1:01:49.913 ***** 2026-02-05 05:42:28.263774 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263787 | orchestrator | 2026-02-05 05:42:28.263795 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:42:28.263802 | orchestrator | Thursday 05 February 2026 05:42:06 +0000 (0:00:01.158) 1:01:51.071 ***** 2026-02-05 05:42:28.263810 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263817 | orchestrator | 2026-02-05 05:42:28.263824 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:42:28.263831 | orchestrator | Thursday 05 February 2026 05:42:07 +0000 (0:00:01.120) 1:01:52.192 ***** 2026-02-05 05:42:28.263843 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:42:28.263852 | orchestrator | 2026-02-05 05:42:28.263859 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:42:28.263866 | orchestrator | Thursday 05 February 2026 05:42:08 +0000 (0:00:01.266) 1:01:53.458 ***** 2026-02-05 05:42:28.263874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:42:28.263881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:42:28.263906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:42:28.263914 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 05:42:28.263922 | orchestrator | 2026-02-05 05:42:28.263929 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:42:28.263936 | orchestrator | Thursday 05 February 2026 05:42:10 +0000 (0:00:01.395) 1:01:54.854 ***** 2026-02-05 05:42:28.263944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:42:28.263951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:42:28.263958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:42:28.263966 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.263973 | orchestrator | 2026-02-05 05:42:28.263980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:42:28.263988 | orchestrator | Thursday 05 February 2026 05:42:11 +0000 (0:00:01.416) 1:01:56.271 ***** 2026-02-05 05:42:28.263995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 05:42:28.264003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 05:42:28.264010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 05:42:28.264017 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.264024 | orchestrator | 2026-02-05 05:42:28.264032 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:42:28.264039 | orchestrator | Thursday 05 February 2026 05:42:12 +0000 (0:00:01.362) 1:01:57.634 ***** 2026-02-05 05:42:28.264046 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:42:28.264053 | orchestrator | 2026-02-05 05:42:28.264061 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:42:28.264068 | orchestrator | Thursday 05 February 2026 05:42:13 +0000 (0:00:01.149) 1:01:58.783 ***** 2026-02-05 05:42:28.264075 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-05 05:42:28.264084 | orchestrator | 2026-02-05 05:42:28.264096 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:42:28.264110 | orchestrator | Thursday 05 February 2026 05:42:15 +0000 (0:00:01.380) 1:02:00.163 ***** 2026-02-05 05:42:28.264121 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:42:28.264134 | orchestrator | 2026-02-05 05:42:28.264146 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-05 05:42:28.264159 | orchestrator | Thursday 05 February 2026 05:42:17 +0000 (0:00:01.832) 1:02:01.996 ***** 2026-02-05 05:42:28.264173 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-05 05:42:28.264186 | orchestrator | 2026-02-05 05:42:28.264198 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 05:42:28.264211 | orchestrator | Thursday 05 February 2026 05:42:18 +0000 (0:00:01.446) 1:02:03.443 ***** 2026-02-05 05:42:28.264220 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:42:28.264227 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 05:42:28.264241 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:42:28.264249 | orchestrator | 2026-02-05 05:42:28.264256 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:42:28.264263 | orchestrator | Thursday 05 February 2026 05:42:22 +0000 (0:00:03.392) 1:02:06.835 ***** 2026-02-05 05:42:28.264270 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-05 05:42:28.264278 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 05:42:28.264285 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:42:28.264292 | orchestrator | 2026-02-05 05:42:28.264300 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-05 05:42:28.264307 | orchestrator | Thursday 05 February 2026 05:42:24 +0000 (0:00:01.990) 1:02:08.826 ***** 2026-02-05 05:42:28.264314 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:42:28.264322 | orchestrator | 2026-02-05 05:42:28.264329 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-05 05:42:28.264336 | orchestrator | Thursday 05 February 2026 05:42:25 +0000 (0:00:01.122) 1:02:09.949 ***** 2026-02-05 05:42:28.264344 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-05 05:42:28.264352 | orchestrator | 2026-02-05 05:42:28.264359 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-05 05:42:28.264366 | orchestrator | Thursday 05 February 2026 05:42:26 +0000 (0:00:01.469) 1:02:11.418 ***** 2026-02-05 05:42:28.264380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:43:47.189742 | orchestrator | 2026-02-05 05:43:47.189868 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-05 05:43:47.189878 | orchestrator | Thursday 05 February 2026 05:42:28 +0000 (0:00:01.654) 1:02:13.073 ***** 2026-02-05 05:43:47.189883 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:43:47.189889 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 05:43:47.189894 | orchestrator | 2026-02-05 05:43:47.189898 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 05:43:47.189902 | orchestrator | Thursday 05 February 2026 05:42:33 +0000 (0:00:05.693) 1:02:18.767 ***** 
2026-02-05 05:43:47.189918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:43:47.189923 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:43:47.189927 | orchestrator | 2026-02-05 05:43:47.189932 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:43:47.189936 | orchestrator | Thursday 05 February 2026 05:42:37 +0000 (0:00:03.232) 1:02:22.000 ***** 2026-02-05 05:43:47.189940 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-05 05:43:47.189944 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:43:47.189949 | orchestrator | 2026-02-05 05:43:47.189953 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-05 05:43:47.189956 | orchestrator | Thursday 05 February 2026 05:42:39 +0000 (0:00:02.061) 1:02:24.062 ***** 2026-02-05 05:43:47.189960 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-05 05:43:47.189964 | orchestrator | 2026-02-05 05:43:47.189968 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-05 05:43:47.189972 | orchestrator | Thursday 05 February 2026 05:42:40 +0000 (0:00:01.581) 1:02:25.643 ***** 2026-02-05 05:43:47.189976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.189980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190012 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:43:47.190051 | orchestrator | 2026-02-05 05:43:47.190055 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-05 05:43:47.190059 | orchestrator | Thursday 05 February 2026 05:42:42 +0000 (0:00:01.546) 1:02:27.190 ***** 2026-02-05 05:43:47.190068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:43:47.190089 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:43:47.190092 | orchestrator | 2026-02-05 05:43:47.190096 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-05 05:43:47.190100 | orchestrator | Thursday 05 February 2026 05:42:43 +0000 (0:00:01.591) 1:02:28.782 ***** 2026-02-05 05:43:47.190104 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:43:47.190109 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:43:47.190113 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:43:47.190118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:43:47.190123 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:43:47.190127 | orchestrator | 2026-02-05 05:43:47.190131 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-05 05:43:47.190147 | orchestrator | Thursday 05 February 2026 05:43:19 +0000 (0:00:35.733) 1:03:04.516 ***** 2026-02-05 05:43:47.190151 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:43:47.190155 | orchestrator | 2026-02-05 05:43:47.190159 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-05 05:43:47.190162 | orchestrator | Thursday 05 February 2026 05:43:20 +0000 (0:00:01.107) 1:03:05.624 ***** 2026-02-05 05:43:47.190166 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:43:47.190170 | orchestrator | 2026-02-05 05:43:47.190174 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-05 05:43:47.190178 | orchestrator | Thursday 05 February 2026 05:43:21 +0000 (0:00:01.099) 1:03:06.723 ***** 2026-02-05 05:43:47.190182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-05 05:43:47.190185 | orchestrator | 2026-02-05 05:43:47.190189 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-05 05:43:47.190197 | orchestrator | Thursday 05 February 2026 05:43:23 +0000 (0:00:01.462) 1:03:08.186 ***** 2026-02-05 05:43:47.190208 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-05 05:43:47.190212 | orchestrator | 2026-02-05 05:43:47.190216 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-05 05:43:47.190220 | orchestrator | Thursday 05 February 2026 05:43:24 +0000 (0:00:01.468) 1:03:09.655 ***** 2026-02-05 05:43:47.190224 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:43:47.190228 | orchestrator | 2026-02-05 05:43:47.190232 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-05 05:43:47.190236 | orchestrator | Thursday 05 February 2026 05:43:26 +0000 (0:00:02.048) 1:03:11.704 ***** 2026-02-05 05:43:47.190240 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:43:47.190243 | orchestrator | 2026-02-05 05:43:47.190247 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-05 05:43:47.190251 | orchestrator | Thursday 05 February 2026 05:43:28 +0000 (0:00:01.977) 1:03:13.681 ***** 2026-02-05 05:43:47.190255 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:43:47.190259 | orchestrator | 2026-02-05 05:43:47.190263 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-05 05:43:47.190266 | orchestrator | Thursday 05 February 2026 05:43:31 +0000 (0:00:02.298) 1:03:15.980 ***** 2026-02-05 05:43:47.190270 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 05:43:47.190274 | orchestrator | 2026-02-05 05:43:47.190278 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-05 05:43:47.190282 | 
orchestrator | 2026-02-05 05:43:47.190286 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 05:43:47.190289 | orchestrator | Thursday 05 February 2026 05:43:34 +0000 (0:00:03.196) 1:03:19.177 ***** 2026-02-05 05:43:47.190293 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-05 05:43:47.190298 | orchestrator | 2026-02-05 05:43:47.190302 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 05:43:47.190307 | orchestrator | Thursday 05 February 2026 05:43:35 +0000 (0:00:01.133) 1:03:20.310 ***** 2026-02-05 05:43:47.190311 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190315 | orchestrator | 2026-02-05 05:43:47.190320 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 05:43:47.190324 | orchestrator | Thursday 05 February 2026 05:43:36 +0000 (0:00:01.443) 1:03:21.754 ***** 2026-02-05 05:43:47.190329 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190333 | orchestrator | 2026-02-05 05:43:47.190337 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:43:47.190341 | orchestrator | Thursday 05 February 2026 05:43:38 +0000 (0:00:01.109) 1:03:22.863 ***** 2026-02-05 05:43:47.190346 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190350 | orchestrator | 2026-02-05 05:43:47.190354 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:43:47.190359 | orchestrator | Thursday 05 February 2026 05:43:39 +0000 (0:00:01.516) 1:03:24.380 ***** 2026-02-05 05:43:47.190363 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190368 | orchestrator | 2026-02-05 05:43:47.190372 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 05:43:47.190377 | orchestrator | Thursday 
05 February 2026 05:43:40 +0000 (0:00:01.124) 1:03:25.504 ***** 2026-02-05 05:43:47.190381 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190385 | orchestrator | 2026-02-05 05:43:47.190389 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 05:43:47.190394 | orchestrator | Thursday 05 February 2026 05:43:41 +0000 (0:00:01.128) 1:03:26.633 ***** 2026-02-05 05:43:47.190398 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190402 | orchestrator | 2026-02-05 05:43:47.190407 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 05:43:47.190411 | orchestrator | Thursday 05 February 2026 05:43:42 +0000 (0:00:01.141) 1:03:27.775 ***** 2026-02-05 05:43:47.190419 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:43:47.190424 | orchestrator | 2026-02-05 05:43:47.190428 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 05:43:47.190432 | orchestrator | Thursday 05 February 2026 05:43:44 +0000 (0:00:01.156) 1:03:28.931 ***** 2026-02-05 05:43:47.190437 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:43:47.190441 | orchestrator | 2026-02-05 05:43:47.190446 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 05:43:47.190450 | orchestrator | Thursday 05 February 2026 05:43:45 +0000 (0:00:01.110) 1:03:30.041 ***** 2026-02-05 05:43:47.190455 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:43:47.190459 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:43:47.190464 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:43:47.190468 | orchestrator | 2026-02-05 05:43:47.190473 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-05 05:43:47.190480 | orchestrator | Thursday 05 February 2026 05:43:47 +0000 (0:00:01.958) 1:03:32.000 ***** 2026-02-05 05:44:11.846403 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:11.866502 | orchestrator | 2026-02-05 05:44:11.866575 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 05:44:11.866588 | orchestrator | Thursday 05 February 2026 05:43:48 +0000 (0:00:01.242) 1:03:33.242 ***** 2026-02-05 05:44:11.866597 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 05:44:11.866606 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 05:44:11.866614 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 05:44:11.866621 | orchestrator | 2026-02-05 05:44:11.866629 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 05:44:11.866658 | orchestrator | Thursday 05 February 2026 05:43:51 +0000 (0:00:03.054) 1:03:36.297 ***** 2026-02-05 05:44:11.866668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 05:44:11.866676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 05:44:11.866684 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 05:44:11.866692 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.866700 | orchestrator | 2026-02-05 05:44:11.866707 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 05:44:11.866714 | orchestrator | Thursday 05 February 2026 05:43:52 +0000 (0:00:01.407) 1:03:37.705 ***** 2026-02-05 05:44:11.866723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 05:44:11.866733 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 05:44:11.866740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 05:44:11.866748 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.866756 | orchestrator | 2026-02-05 05:44:11.866764 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 05:44:11.866771 | orchestrator | Thursday 05 February 2026 05:43:54 +0000 (0:00:01.615) 1:03:39.321 ***** 2026-02-05 05:44:11.866782 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:44:11.866849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:44:11.866858 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 05:44:11.866865 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.866872 | orchestrator | 2026-02-05 05:44:11.866879 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 05:44:11.866886 | orchestrator | Thursday 05 February 2026 05:43:55 +0000 (0:00:01.179) 1:03:40.500 ***** 2026-02-05 05:44:11.866924 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:43:49.054947', 'end': '2026-02-05 05:43:49.107637', 'delta': '0:00:00.052690', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 05:44:11.866943 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:43:49.648492', 'end': '2026-02-05 05:43:49.696427', 'delta': '0:00:00.047935', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 05:44:11.866951 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:43:50.253470', 'end': '2026-02-05 05:43:50.304127', 'delta': '0:00:00.050657', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 05:44:11.866958 | orchestrator | 2026-02-05 05:44:11.866965 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 05:44:11.866972 | orchestrator | Thursday 05 February 2026 05:43:56 +0000 (0:00:01.187) 1:03:41.687 ***** 2026-02-05 05:44:11.866986 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:11.866993 | orchestrator | 2026-02-05 05:44:11.867001 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 05:44:11.867009 | orchestrator | Thursday 05 February 2026 05:43:58 +0000 (0:00:01.280) 1:03:42.967 ***** 2026-02-05 05:44:11.867016 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.867023 | orchestrator | 2026-02-05 05:44:11.867030 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-05 05:44:11.867036 | orchestrator | Thursday 05 February 2026 05:43:59 +0000 (0:00:01.263) 1:03:44.231 ***** 2026-02-05 05:44:11.867043 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:11.867050 | orchestrator | 2026-02-05 05:44:11.867056 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 05:44:11.867063 | orchestrator | Thursday 05 February 2026 05:44:00 +0000 (0:00:01.120) 1:03:45.352 ***** 2026-02-05 05:44:11.867070 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-05 05:44:11.867077 | orchestrator | 2026-02-05 05:44:11.867084 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:44:11.867092 | orchestrator | Thursday 05 February 2026 05:44:02 +0000 (0:00:02.010) 1:03:47.363 ***** 2026-02-05 05:44:11.867099 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:11.867107 | orchestrator | 2026-02-05 05:44:11.867115 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 05:44:11.867122 | orchestrator | Thursday 05 February 2026 05:44:03 +0000 (0:00:01.121) 1:03:48.484 ***** 2026-02-05 05:44:11.867128 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.867135 | orchestrator | 2026-02-05 05:44:11.867142 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 05:44:11.867149 | orchestrator | Thursday 05 February 2026 05:44:04 +0000 (0:00:01.090) 1:03:49.574 ***** 2026-02-05 05:44:11.867156 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.867163 | orchestrator | 2026-02-05 05:44:11.867170 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 05:44:11.867178 | orchestrator | Thursday 05 February 2026 05:44:05 +0000 (0:00:01.235) 1:03:50.810 ***** 2026-02-05 05:44:11.867185 | orchestrator | 
skipping: [testbed-node-4] 2026-02-05 05:44:11.867193 | orchestrator | 2026-02-05 05:44:11.867201 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 05:44:11.867208 | orchestrator | Thursday 05 February 2026 05:44:07 +0000 (0:00:01.102) 1:03:51.913 ***** 2026-02-05 05:44:11.867215 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.867223 | orchestrator | 2026-02-05 05:44:11.867231 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 05:44:11.867238 | orchestrator | Thursday 05 February 2026 05:44:08 +0000 (0:00:01.155) 1:03:53.068 ***** 2026-02-05 05:44:11.867244 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:11.867252 | orchestrator | 2026-02-05 05:44:11.867259 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 05:44:11.867267 | orchestrator | Thursday 05 February 2026 05:44:09 +0000 (0:00:01.257) 1:03:54.325 ***** 2026-02-05 05:44:11.867274 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:11.867282 | orchestrator | 2026-02-05 05:44:11.867288 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 05:44:11.867296 | orchestrator | Thursday 05 February 2026 05:44:10 +0000 (0:00:01.140) 1:03:55.466 ***** 2026-02-05 05:44:11.867303 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:11.867310 | orchestrator | 2026-02-05 05:44:11.867317 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 05:44:11.867334 | orchestrator | Thursday 05 February 2026 05:44:11 +0000 (0:00:01.189) 1:03:56.656 ***** 2026-02-05 05:44:14.349722 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:44:14.349858 | orchestrator | 2026-02-05 05:44:14.349878 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 05:44:14.349890 
| orchestrator | Thursday 05 February 2026 05:44:12 +0000 (0:00:01.096) 1:03:57.752 ***** 2026-02-05 05:44:14.349927 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:44:14.349938 | orchestrator | 2026-02-05 05:44:14.349947 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 05:44:14.349958 | orchestrator | Thursday 05 February 2026 05:44:14 +0000 (0:00:01.196) 1:03:58.949 ***** 2026-02-05 05:44:14.349985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:44:14.350002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}})  2026-02-05 05:44:14.350060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-05 05:44:14.350070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}})  2026-02-05 05:44:14.350078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:44:14.350085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:44:14.350108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-05 05:44:14.350128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:44:14.350135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:44:14.350141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:44:14.350148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}})  2026-02-05 05:44:14.350155 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}})
2026-02-05 05:44:14.350162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:44:14.350181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 05:44:15.682987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:44:15.683097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:44:15.683129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-05 05:44:15.683157 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:44:15.683179 | orchestrator |
2026-02-05 05:44:15.683198 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-05 05:44:15.683216 | orchestrator | Thursday 05 February 2026 05:44:15 +0000 (0:00:01.340) 1:04:00.290 *****
2026-02-05 05:44:15.683236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c', 'dm-uuid-LVM-5TLZe1Tgo1TKM8GkjUpfN78ieh5w0ANrQNgi2dmi5diYRe7Lgm9DH3wMJKHbVGFu'], 'uuids': ['4b1d437a-dc47-4238-b645-763e611994c7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683330 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a', 'scsi-SQEMU_QEMU_HARDDISK_64f88b59-145a-4204-a5cc-35bb4626474a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '64f88b59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-K9GKOz-fxxR-Pm8N-aWMy-HniX-e8kz-eif3cf', 'scsi-0QEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a', 'scsi-SQEMU_QEMU_HARDDISK_9d4195ed-cd70-4bda-970e-203e54c5de2a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683398 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683418 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:15.683511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ', 'dm-uuid-CRYPT-LUKS2-2c590a41d7cb49b2bfdc5ce322fde490-xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--599b5b3c--37df--591b--a248--24d26d466625-osd--block--599b5b3c--37df--591b--a248--24d26d466625', 'dm-uuid-LVM-9Y06a2zVor1lRD1cyPlucPXWC0aPbN2JxLYAdcU08G9AXF4NeOKXZ9V1sHvTv2MQ'], 'uuids': ['2c590a41-d7cb-49b2-bfdc-5ce322fde490'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9d4195ed', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xLYAdc-U08G-9AXF-4NeO-KXZ9-V1sH-vTv2MQ']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Pz8pQL-5OmI-WkJt-J5Qa-2PBj-Qacj-FgSo8f', 'scsi-0QEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930', 'scsi-SQEMU_QEMU_HARDDISK_46213c6d-7232-49e5-8bd8-8f24dba1e930'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '46213c6d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c-osd--block--f66c2ad0--d8eb--5a81--b3e8--9df8f695bb6c']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994167 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f5aaaa4a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1', 'scsi-SQEMU_QEMU_HARDDISK_f5aaaa4a-8fb3-4e0e-822f-f1ff89bc5dde-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994206 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994215 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu', 'dm-uuid-CRYPT-LUKS2-4b1d437adc474238b645763e611994c7-QNgi2d-mi5d-iYRe-7Lgm-9DH3-wMJK-HbVGFu'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:44:20.994222 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:44:20.994230 | orchestrator |
2026-02-05 05:44:20.994237 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-05 05:44:20.994244 | orchestrator | Thursday 05 February 2026 05:44:16 +0000 (0:00:01.537) 1:04:01.677 *****
2026-02-05 05:44:20.994250 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:44:20.994256 | orchestrator |
2026-02-05 05:44:20.994262 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 05:44:20.994267 | orchestrator | Thursday 05 February 2026 05:44:18 +0000 (0:00:01.120) 1:04:03.214 *****
2026-02-05 05:44:20.994273 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:44:20.994279 | orchestrator |
2026-02-05 05:44:20.994284 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:44:20.994290 | orchestrator | Thursday 05 February 2026 05:44:19 +0000 (0:00:01.474) 1:04:04.335 *****
2026-02-05 05:44:20.994295 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:44:20.994300 | orchestrator |
2026-02-05 05:44:20.994306 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:44:20.994315 | orchestrator | Thursday 05 February 2026 05:44:20 +0000 (0:00:01.119) 1:04:05.809 *****
2026-02-05 05:45:02.340510 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340616 | orchestrator |
2026-02-05 05:45:02.340630 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:45:02.340639 | orchestrator | Thursday 05 February 2026 05:44:22 +0000 (0:00:01.119) 1:04:06.929 *****
2026-02-05 05:45:02.340645 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340652 | orchestrator |
2026-02-05 05:45:02.340658 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:45:02.340665 | orchestrator | Thursday 05 February 2026 05:44:23 +0000 (0:00:01.550) 1:04:08.480 *****
2026-02-05 05:45:02.340672 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340698 | orchestrator |
2026-02-05 05:45:02.340706 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 05:45:02.340712 | orchestrator | Thursday 05 February 2026 05:44:24 +0000 (0:00:01.130) 1:04:09.610 *****
2026-02-05 05:45:02.340719 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 05:45:02.340726 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 05:45:02.340732 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 05:45:02.340784 | orchestrator |
2026-02-05 05:45:02.340791 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 05:45:02.340797 | orchestrator | Thursday 05 February 2026 05:44:26 +0000 (0:00:01.697) 1:04:11.308 *****
2026-02-05 05:45:02.340804 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 05:45:02.340811 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 05:45:02.340818 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 05:45:02.340824 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340830 | orchestrator |
2026-02-05 05:45:02.340837 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 05:45:02.340844 | orchestrator | Thursday 05 February 2026 05:44:27 +0000 (0:00:01.133) 1:04:12.441 *****
2026-02-05 05:45:02.340851 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-05 05:45:02.340859 | orchestrator |
2026-02-05 05:45:02.340867 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:45:02.340875 | orchestrator | Thursday 05 February 2026 05:44:28 +0000 (0:00:01.158) 1:04:13.600 *****
2026-02-05 05:45:02.340882 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340889 | orchestrator |
2026-02-05 05:45:02.340895 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:45:02.340902 | orchestrator | Thursday 05 February 2026 05:44:29 +0000 (0:00:01.127) 1:04:14.728 *****
2026-02-05 05:45:02.340909 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340916 | orchestrator |
2026-02-05 05:45:02.340922 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:45:02.340929 | orchestrator | Thursday 05 February 2026 05:44:31 +0000 (0:00:01.131) 1:04:15.859 *****
2026-02-05 05:45:02.340936 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.340942 | orchestrator |
2026-02-05 05:45:02.340949 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:45:02.340956 | orchestrator | Thursday 05 February 2026 05:44:32 +0000 (0:00:01.141) 1:04:17.001 *****
2026-02-05 05:45:02.340962 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:02.340969 | orchestrator |
2026-02-05 05:45:02.340975 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:45:02.340982 | orchestrator | Thursday 05 February 2026 05:44:33 +0000 (0:00:01.232) 1:04:18.233 *****
2026-02-05 05:45:02.340989 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 05:45:02.340995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:45:02.341002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 05:45:02.341009 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341016 | orchestrator |
2026-02-05 05:45:02.341037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:45:02.341044 | orchestrator | Thursday 05 February 2026 05:44:34 +0000 (0:00:01.379) 1:04:19.613 *****
2026-02-05 05:45:02.341050 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 05:45:02.341057 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:45:02.341063 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 05:45:02.341070 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341077 | orchestrator |
2026-02-05 05:45:02.341083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:45:02.341097 | orchestrator | Thursday 05 February 2026 05:44:36 +0000 (0:00:01.732) 1:04:21.345 *****
2026-02-05 05:45:02.341104 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 05:45:02.341110 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:45:02.341117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 05:45:02.341123 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341130 | orchestrator |
2026-02-05 05:45:02.341137 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:45:02.341143 | orchestrator | Thursday 05 February 2026 05:44:38 +0000 (0:00:01.670) 1:04:23.016 *****
2026-02-05 05:45:02.341149 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:02.341156 | orchestrator |
2026-02-05 05:45:02.341163 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:45:02.341169 | orchestrator | Thursday 05 February 2026 05:44:39 +0000 (0:00:01.193) 1:04:24.209 *****
2026-02-05 05:45:02.341176 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-05 05:45:02.341183 | orchestrator |
2026-02-05 05:45:02.341189 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-05 05:45:02.341196 | orchestrator | Thursday 05 February 2026 05:44:40 +0000 (0:00:01.320) 1:04:25.530 *****
2026-02-05 05:45:02.341221 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:45:02.341227 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:45:02.341233 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:45:02.341239 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:45:02.341245 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:45:02.341251 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 05:45:02.341258 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:45:02.341264 | orchestrator |
2026-02-05 05:45:02.341271 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-05 05:45:02.341277 | orchestrator | Thursday 05 February 2026 05:44:42 +0000 (0:00:02.156) 1:04:27.307 *****
2026-02-05 05:45:02.341284 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:45:02.341291 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:45:02.341297 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:45:02.341304 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:45:02.341310 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:45:02.341317 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-05 05:45:02.341323 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:45:02.341330 | orchestrator |
2026-02-05 05:45:02.341336 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-05 05:45:02.341343 | orchestrator | Thursday 05 February 2026 05:44:44 +0000 (0:00:01.776) 1:04:29.464 *****
2026-02-05 05:45:02.341349 | orchestrator | changed: [testbed-node-4]
2026-02-05 05:45:02.341356 | orchestrator |
2026-02-05 05:45:02.341363 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-05 05:45:02.341369 | orchestrator | Thursday 05 February 2026 05:44:46 +0000 (0:00:01.937) 1:04:31.402 *****
2026-02-05 05:45:02.341376 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 05:45:02.341382 | orchestrator |
2026-02-05 05:45:02.341389 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-05 05:45:02.341400 | orchestrator | Thursday 05 February 2026 05:44:49 +0000 (0:00:02.549) 1:04:33.951 *****
2026-02-05 05:45:02.341406 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 05:45:02.341412 | orchestrator |
2026-02-05 05:45:02.341418 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 05:45:02.341423 | orchestrator | Thursday 05 February 2026 05:44:51 +0000 (0:00:01.991) 1:04:35.943 *****
2026-02-05 05:45:02.341429 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-05 05:45:02.341435 | orchestrator |
2026-02-05 05:45:02.341441 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 05:45:02.341446 | orchestrator | Thursday 05 February 2026 05:44:52 +0000 (0:00:01.098) 1:04:37.041 *****
2026-02-05 05:45:02.341452 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-05 05:45:02.341458 | orchestrator |
2026-02-05 05:45:02.341464 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 05:45:02.341482 | orchestrator | Thursday 05 February 2026 05:44:53 +0000 (0:00:01.130) 1:04:38.171 *****
2026-02-05 05:45:02.341488 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341494 | orchestrator |
2026-02-05 05:45:02.341500 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 05:45:02.341506 | orchestrator | Thursday 05 February 2026 05:44:54 +0000 (0:00:01.083) 1:04:39.255 *****
2026-02-05 05:45:02.341511 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:02.341517 | orchestrator |
2026-02-05 05:45:02.341523 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 05:45:02.341529 | orchestrator | Thursday 05 February 2026 05:44:56 +0000 (0:00:01.625) 1:04:40.880 *****
2026-02-05 05:45:02.341536 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:02.341542 | orchestrator |
2026-02-05 05:45:02.341548 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 05:45:02.341554 | orchestrator | Thursday 05 February 2026 05:44:57 +0000 (0:00:01.472) 1:04:42.353 *****
2026-02-05 05:45:02.341561 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:02.341566 | orchestrator |
2026-02-05 05:45:02.341573 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 05:45:02.341579 | orchestrator | Thursday 05 February 2026 05:44:59 +0000 (0:00:01.527) 1:04:43.881 *****
2026-02-05 05:45:02.341586 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341592 | orchestrator |
2026-02-05 05:45:02.341599 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 05:45:02.341606 | orchestrator | Thursday 05 February 2026 05:45:00 +0000 (0:00:01.073) 1:04:44.955 *****
2026-02-05 05:45:02.341612 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341619 | orchestrator |
2026-02-05 05:45:02.341625 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 05:45:02.341631 | orchestrator | Thursday 05 February 2026 05:45:01 +0000 (0:00:01.098) 1:04:46.054 *****
2026-02-05 05:45:02.341637 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:02.341643 | orchestrator |
2026-02-05 05:45:02.341649 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 05:45:02.341663 | orchestrator | Thursday 05 February 2026 05:45:02 +0000 (0:00:01.095) 1:04:47.149 *****
2026-02-05 05:45:41.381426 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381520 | orchestrator |
2026-02-05 05:45:41.381532 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 05:45:41.381542 | orchestrator | Thursday 05 February 2026 05:45:03 +0000 (0:00:01.583) 1:04:48.733 *****
2026-02-05 05:45:41.381555 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381567 | orchestrator |
2026-02-05 05:45:41.381579 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 05:45:41.381592 | orchestrator | Thursday 05 February 2026 05:45:05 +0000 (0:00:01.586) 1:04:50.320 *****
2026-02-05 05:45:41.381630 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:41.381643 | orchestrator |
2026-02-05 05:45:41.381651 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 05:45:41.381658 | orchestrator | Thursday 05 February 2026 05:45:06 +0000 (0:00:00.764) 1:04:51.084 *****
2026-02-05 05:45:41.381666 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:41.381673 | orchestrator |
2026-02-05 05:45:41.381680 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 05:45:41.381688 | orchestrator | Thursday 05 February 2026 05:45:07 +0000 (0:00:00.757) 1:04:51.842 *****
2026-02-05 05:45:41.381695 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381744 | orchestrator |
2026-02-05 05:45:41.381755 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 05:45:41.381763 | orchestrator | Thursday 05 February 2026 05:45:07 +0000 (0:00:00.762) 1:04:52.605 *****
2026-02-05 05:45:41.381771 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381778 | orchestrator |
2026-02-05 05:45:41.381785 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 05:45:41.381793 | orchestrator | Thursday 05 February 2026 05:45:08 +0000 (0:00:00.766) 1:04:53.371 *****
2026-02-05 05:45:41.381800 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381808 | orchestrator |
2026-02-05 05:45:41.381815 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 05:45:41.381822 | orchestrator | Thursday 05 February 2026 05:45:09 +0000 (0:00:00.789) 1:04:54.160 *****
2026-02-05 05:45:41.381830 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:41.381837 | orchestrator |
2026-02-05 05:45:41.381844 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 05:45:41.381852 | orchestrator | Thursday 05 February 2026 05:45:10 +0000 (0:00:00.749) 1:04:54.910 *****
2026-02-05 05:45:41.381859 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:41.381866 | orchestrator |
2026-02-05 05:45:41.381873 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 05:45:41.381880 | orchestrator | Thursday 05 February 2026 05:45:10 +0000 (0:00:00.751) 1:04:55.662 *****
2026-02-05 05:45:41.381888 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:45:41.381895 | orchestrator |
2026-02-05 05:45:41.381902 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 05:45:41.381910 | orchestrator | Thursday 05 February 2026 05:45:11 +0000 (0:00:00.739) 1:04:56.401 *****
2026-02-05 05:45:41.381917 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381924 | orchestrator |
2026-02-05 05:45:41.381931 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 05:45:41.381938 | orchestrator | Thursday 05 February 2026 05:45:12 +0000 (0:00:00.760) 1:04:57.161 *****
2026-02-05 05:45:41.381946 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:45:41.381953 | orchestrator |
2026-02-05 05:45:41.381960
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 05:45:41.381968 | orchestrator | Thursday 05 February 2026 05:45:13 +0000 (0:00:00.780) 1:04:57.942 ***** 2026-02-05 05:45:41.381975 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.381983 | orchestrator | 2026-02-05 05:45:41.381991 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 05:45:41.382000 | orchestrator | Thursday 05 February 2026 05:45:13 +0000 (0:00:00.741) 1:04:58.684 ***** 2026-02-05 05:45:41.382008 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382061 | orchestrator | 2026-02-05 05:45:41.382084 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 05:45:41.382093 | orchestrator | Thursday 05 February 2026 05:45:14 +0000 (0:00:00.752) 1:04:59.436 ***** 2026-02-05 05:45:41.382101 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382110 | orchestrator | 2026-02-05 05:45:41.382118 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 05:45:41.382126 | orchestrator | Thursday 05 February 2026 05:45:15 +0000 (0:00:00.762) 1:05:00.198 ***** 2026-02-05 05:45:41.382142 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382150 | orchestrator | 2026-02-05 05:45:41.382159 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 05:45:41.382168 | orchestrator | Thursday 05 February 2026 05:45:16 +0000 (0:00:00.764) 1:05:00.963 ***** 2026-02-05 05:45:41.382176 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382184 | orchestrator | 2026-02-05 05:45:41.382193 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 05:45:41.382201 | orchestrator | Thursday 05 February 2026 05:45:16 +0000 (0:00:00.743) 1:05:01.707 ***** 
2026-02-05 05:45:41.382209 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382218 | orchestrator | 2026-02-05 05:45:41.382227 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 05:45:41.382236 | orchestrator | Thursday 05 February 2026 05:45:17 +0000 (0:00:00.739) 1:05:02.446 ***** 2026-02-05 05:45:41.382244 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382253 | orchestrator | 2026-02-05 05:45:41.382261 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 05:45:41.382270 | orchestrator | Thursday 05 February 2026 05:45:18 +0000 (0:00:00.760) 1:05:03.206 ***** 2026-02-05 05:45:41.382279 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382287 | orchestrator | 2026-02-05 05:45:41.382296 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:45:41.382304 | orchestrator | Thursday 05 February 2026 05:45:19 +0000 (0:00:00.805) 1:05:04.012 ***** 2026-02-05 05:45:41.382312 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382321 | orchestrator | 2026-02-05 05:45:41.382344 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:45:41.382353 | orchestrator | Thursday 05 February 2026 05:45:19 +0000 (0:00:00.756) 1:05:04.768 ***** 2026-02-05 05:45:41.382362 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382370 | orchestrator | 2026-02-05 05:45:41.382378 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:45:41.382387 | orchestrator | Thursday 05 February 2026 05:45:20 +0000 (0:00:00.782) 1:05:05.550 ***** 2026-02-05 05:45:41.382395 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382403 | orchestrator | 2026-02-05 05:45:41.382411 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-05 05:45:41.382418 | orchestrator | Thursday 05 February 2026 05:45:21 +0000 (0:00:00.752) 1:05:06.303 ***** 2026-02-05 05:45:41.382425 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382433 | orchestrator | 2026-02-05 05:45:41.382440 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:45:41.382447 | orchestrator | Thursday 05 February 2026 05:45:22 +0000 (0:00:00.759) 1:05:07.062 ***** 2026-02-05 05:45:41.382454 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:45:41.382462 | orchestrator | 2026-02-05 05:45:41.382469 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:45:41.382476 | orchestrator | Thursday 05 February 2026 05:45:23 +0000 (0:00:01.602) 1:05:08.664 ***** 2026-02-05 05:45:41.382483 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:45:41.382490 | orchestrator | 2026-02-05 05:45:41.382498 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:45:41.382505 | orchestrator | Thursday 05 February 2026 05:45:25 +0000 (0:00:01.977) 1:05:10.642 ***** 2026-02-05 05:45:41.382512 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-05 05:45:41.382520 | orchestrator | 2026-02-05 05:45:41.382528 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:45:41.382535 | orchestrator | Thursday 05 February 2026 05:45:26 +0000 (0:00:01.105) 1:05:11.747 ***** 2026-02-05 05:45:41.382542 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382549 | orchestrator | 2026-02-05 05:45:41.382556 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:45:41.382569 | orchestrator | Thursday 05 February 2026 05:45:28 +0000 (0:00:01.125) 1:05:12.873 ***** 
2026-02-05 05:45:41.382576 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382583 | orchestrator | 2026-02-05 05:45:41.382590 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:45:41.382597 | orchestrator | Thursday 05 February 2026 05:45:29 +0000 (0:00:01.114) 1:05:13.987 ***** 2026-02-05 05:45:41.382605 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:45:41.382616 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:45:41.382628 | orchestrator | 2026-02-05 05:45:41.382640 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:45:41.382652 | orchestrator | Thursday 05 February 2026 05:45:31 +0000 (0:00:01.861) 1:05:15.849 ***** 2026-02-05 05:45:41.382662 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:45:41.382674 | orchestrator | 2026-02-05 05:45:41.382684 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:45:41.382696 | orchestrator | Thursday 05 February 2026 05:45:32 +0000 (0:00:01.470) 1:05:17.319 ***** 2026-02-05 05:45:41.382729 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382740 | orchestrator | 2026-02-05 05:45:41.382751 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:45:41.382763 | orchestrator | Thursday 05 February 2026 05:45:33 +0000 (0:00:01.132) 1:05:18.452 ***** 2026-02-05 05:45:41.382773 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.382784 | orchestrator | 2026-02-05 05:45:41.382795 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:45:41.382813 | orchestrator | Thursday 05 February 2026 05:45:34 +0000 (0:00:00.785) 1:05:19.237 ***** 2026-02-05 05:45:41.382824 | orchestrator | 
skipping: [testbed-node-4] 2026-02-05 05:45:41.382835 | orchestrator | 2026-02-05 05:45:41.382847 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:45:41.382857 | orchestrator | Thursday 05 February 2026 05:45:35 +0000 (0:00:00.751) 1:05:19.988 ***** 2026-02-05 05:45:41.382868 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-05 05:45:41.382879 | orchestrator | 2026-02-05 05:45:41.382890 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:45:41.382901 | orchestrator | Thursday 05 February 2026 05:45:36 +0000 (0:00:01.095) 1:05:21.084 ***** 2026-02-05 05:45:41.382913 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:45:41.382925 | orchestrator | 2026-02-05 05:45:41.382936 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:45:41.382947 | orchestrator | Thursday 05 February 2026 05:45:37 +0000 (0:00:01.715) 1:05:22.800 ***** 2026-02-05 05:45:41.382958 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:45:41.382970 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:45:41.382982 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:45:41.382994 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.383005 | orchestrator | 2026-02-05 05:45:41.383017 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:45:41.383029 | orchestrator | Thursday 05 February 2026 05:45:39 +0000 (0:00:01.122) 1:05:23.922 ***** 2026-02-05 05:45:41.383036 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.383043 | orchestrator | 2026-02-05 05:45:41.383050 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-05 05:45:41.383058 | orchestrator | Thursday 05 February 2026 05:45:40 +0000 (0:00:01.145) 1:05:25.068 ***** 2026-02-05 05:45:41.383065 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:45:41.383072 | orchestrator | 2026-02-05 05:45:41.383088 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:46:24.110908 | orchestrator | Thursday 05 February 2026 05:45:41 +0000 (0:00:01.122) 1:05:26.190 ***** 2026-02-05 05:46:24.111024 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111037 | orchestrator | 2026-02-05 05:46:24.111045 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:46:24.111052 | orchestrator | Thursday 05 February 2026 05:45:42 +0000 (0:00:01.117) 1:05:27.308 ***** 2026-02-05 05:46:24.111059 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111065 | orchestrator | 2026-02-05 05:46:24.111071 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:46:24.111078 | orchestrator | Thursday 05 February 2026 05:45:43 +0000 (0:00:01.116) 1:05:28.425 ***** 2026-02-05 05:46:24.111084 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111090 | orchestrator | 2026-02-05 05:46:24.111096 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:46:24.111102 | orchestrator | Thursday 05 February 2026 05:45:44 +0000 (0:00:00.770) 1:05:29.195 ***** 2026-02-05 05:46:24.111108 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:46:24.111116 | orchestrator | 2026-02-05 05:46:24.111122 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:46:24.111129 | orchestrator | Thursday 05 February 2026 05:45:46 +0000 (0:00:02.208) 1:05:31.404 ***** 2026-02-05 05:46:24.111134 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 05:46:24.111140 | orchestrator | 2026-02-05 05:46:24.111146 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:46:24.111152 | orchestrator | Thursday 05 February 2026 05:45:47 +0000 (0:00:00.769) 1:05:32.173 ***** 2026-02-05 05:46:24.111158 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-05 05:46:24.111165 | orchestrator | 2026-02-05 05:46:24.111171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:46:24.111178 | orchestrator | Thursday 05 February 2026 05:45:48 +0000 (0:00:01.110) 1:05:33.284 ***** 2026-02-05 05:46:24.111185 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111189 | orchestrator | 2026-02-05 05:46:24.111193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:46:24.111197 | orchestrator | Thursday 05 February 2026 05:45:49 +0000 (0:00:01.117) 1:05:34.401 ***** 2026-02-05 05:46:24.111201 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111205 | orchestrator | 2026-02-05 05:46:24.111208 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:46:24.111212 | orchestrator | Thursday 05 February 2026 05:45:50 +0000 (0:00:01.125) 1:05:35.527 ***** 2026-02-05 05:46:24.111216 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111220 | orchestrator | 2026-02-05 05:46:24.111223 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:46:24.111227 | orchestrator | Thursday 05 February 2026 05:45:51 +0000 (0:00:01.115) 1:05:36.642 ***** 2026-02-05 05:46:24.111231 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111235 | orchestrator | 2026-02-05 05:46:24.111239 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-05 05:46:24.111243 | orchestrator | Thursday 05 February 2026 05:45:52 +0000 (0:00:01.112) 1:05:37.754 ***** 2026-02-05 05:46:24.111246 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111250 | orchestrator | 2026-02-05 05:46:24.111254 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:46:24.111258 | orchestrator | Thursday 05 February 2026 05:45:54 +0000 (0:00:01.183) 1:05:38.937 ***** 2026-02-05 05:46:24.111261 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111265 | orchestrator | 2026-02-05 05:46:24.111269 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:46:24.111273 | orchestrator | Thursday 05 February 2026 05:45:55 +0000 (0:00:01.130) 1:05:40.067 ***** 2026-02-05 05:46:24.111277 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111280 | orchestrator | 2026-02-05 05:46:24.111295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:46:24.111303 | orchestrator | Thursday 05 February 2026 05:45:56 +0000 (0:00:01.152) 1:05:41.220 ***** 2026-02-05 05:46:24.111307 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111311 | orchestrator | 2026-02-05 05:46:24.111314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:46:24.111318 | orchestrator | Thursday 05 February 2026 05:45:57 +0000 (0:00:01.119) 1:05:42.340 ***** 2026-02-05 05:46:24.111322 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:46:24.111326 | orchestrator | 2026-02-05 05:46:24.111330 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:46:24.111333 | orchestrator | Thursday 05 February 2026 05:45:58 +0000 (0:00:00.787) 1:05:43.128 ***** 2026-02-05 05:46:24.111337 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-05 05:46:24.111342 | orchestrator | 2026-02-05 05:46:24.111346 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:46:24.111349 | orchestrator | Thursday 05 February 2026 05:45:59 +0000 (0:00:01.184) 1:05:44.312 ***** 2026-02-05 05:46:24.111353 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-05 05:46:24.111358 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-05 05:46:24.111362 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-05 05:46:24.111365 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-05 05:46:24.111369 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-05 05:46:24.111373 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-05 05:46:24.111377 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-05 05:46:24.111381 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:46:24.111384 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:46:24.111388 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:46:24.111392 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:46:24.111408 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:46:24.111412 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:46:24.111416 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:46:24.111420 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-05 05:46:24.111424 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-05 05:46:24.111428 | orchestrator | 2026-02-05 05:46:24.111432 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:46:24.111436 | orchestrator | Thursday 05 February 2026 05:46:06 +0000 (0:00:06.533) 1:05:50.846 ***** 2026-02-05 05:46:24.111439 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-05 05:46:24.111443 | orchestrator | 2026-02-05 05:46:24.111447 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 05:46:24.111451 | orchestrator | Thursday 05 February 2026 05:46:07 +0000 (0:00:01.116) 1:05:51.963 ***** 2026-02-05 05:46:24.111456 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:46:24.111461 | orchestrator | 2026-02-05 05:46:24.111466 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:46:24.111470 | orchestrator | Thursday 05 February 2026 05:46:08 +0000 (0:00:01.510) 1:05:53.473 ***** 2026-02-05 05:46:24.111475 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:46:24.111480 | orchestrator | 2026-02-05 05:46:24.111484 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:46:24.111488 | orchestrator | Thursday 05 February 2026 05:46:10 +0000 (0:00:01.634) 1:05:55.108 ***** 2026-02-05 05:46:24.111496 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111501 | orchestrator | 2026-02-05 05:46:24.111506 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:46:24.111510 | orchestrator | Thursday 05 February 2026 05:46:11 +0000 (0:00:00.766) 1:05:55.874 ***** 2026-02-05 05:46:24.111515 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111519 | 
orchestrator | 2026-02-05 05:46:24.111524 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:46:24.111528 | orchestrator | Thursday 05 February 2026 05:46:11 +0000 (0:00:00.784) 1:05:56.658 ***** 2026-02-05 05:46:24.111532 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111537 | orchestrator | 2026-02-05 05:46:24.111541 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 05:46:24.111546 | orchestrator | Thursday 05 February 2026 05:46:12 +0000 (0:00:00.762) 1:05:57.421 ***** 2026-02-05 05:46:24.111550 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111555 | orchestrator | 2026-02-05 05:46:24.111559 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:46:24.111563 | orchestrator | Thursday 05 February 2026 05:46:13 +0000 (0:00:00.812) 1:05:58.234 ***** 2026-02-05 05:46:24.111568 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111572 | orchestrator | 2026-02-05 05:46:24.111577 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:46:24.111581 | orchestrator | Thursday 05 February 2026 05:46:14 +0000 (0:00:00.829) 1:05:59.064 ***** 2026-02-05 05:46:24.111586 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111591 | orchestrator | 2026-02-05 05:46:24.111595 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:46:24.111600 | orchestrator | Thursday 05 February 2026 05:46:14 +0000 (0:00:00.755) 1:05:59.819 ***** 2026-02-05 05:46:24.111606 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111611 | orchestrator | 2026-02-05 05:46:24.111615 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-05 05:46:24.111619 | orchestrator | Thursday 05 February 2026 05:46:15 +0000 (0:00:00.792) 1:06:00.612 ***** 2026-02-05 05:46:24.111623 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111627 | orchestrator | 2026-02-05 05:46:24.111631 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:46:24.111635 | orchestrator | Thursday 05 February 2026 05:46:16 +0000 (0:00:00.836) 1:06:01.448 ***** 2026-02-05 05:46:24.111638 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111642 | orchestrator | 2026-02-05 05:46:24.111646 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:46:24.111650 | orchestrator | Thursday 05 February 2026 05:46:17 +0000 (0:00:00.807) 1:06:02.255 ***** 2026-02-05 05:46:24.111654 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111657 | orchestrator | 2026-02-05 05:46:24.111661 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:46:24.111665 | orchestrator | Thursday 05 February 2026 05:46:18 +0000 (0:00:00.788) 1:06:03.043 ***** 2026-02-05 05:46:24.111669 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:46:24.111702 | orchestrator | 2026-02-05 05:46:24.111708 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:46:24.111711 | orchestrator | Thursday 05 February 2026 05:46:19 +0000 (0:00:00.789) 1:06:03.833 ***** 2026-02-05 05:46:24.111715 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:46:24.111720 | orchestrator | 2026-02-05 05:46:24.111726 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:46:24.111732 | orchestrator | Thursday 05 February 2026 05:46:23 +0000 (0:00:04.268) 1:06:08.102 ***** 2026-02-05 05:46:24.111738 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 05:46:24.111748 | orchestrator | 2026-02-05 05:46:24.111758 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:47:05.143456 | orchestrator | Thursday 05 February 2026 05:46:24 +0000 (0:00:00.822) 1:06:08.924 ***** 2026-02-05 05:47:05.143542 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-05 05:47:05.143553 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-05 05:47:05.143561 | orchestrator | 2026-02-05 05:47:05.143567 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:47:05.143573 | orchestrator | Thursday 05 February 2026 05:46:28 +0000 (0:00:04.490) 1:06:13.415 ***** 2026-02-05 05:47:05.143578 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:47:05.143585 | orchestrator | 2026-02-05 05:47:05.143590 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:47:05.143596 | orchestrator | Thursday 05 February 2026 05:46:29 +0000 (0:00:00.772) 1:06:14.188 ***** 2026-02-05 05:47:05.143601 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:47:05.143606 | orchestrator | 2026-02-05 05:47:05.143612 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:47:05.143618 | orchestrator | Thursday 05 February 2026 05:46:30 +0000 (0:00:00.754) 1:06:14.943 ***** 2026-02-05 05:47:05.143623 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:47:05.143628 | orchestrator | 2026-02-05 05:47:05.143634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:47:05.143639 | orchestrator | Thursday 05 February 2026 05:46:30 +0000 (0:00:00.814) 1:06:15.757 ***** 2026-02-05 05:47:05.143645 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:47:05.143650 | orchestrator | 2026-02-05 05:47:05.143704 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:47:05.143713 | orchestrator | Thursday 05 February 2026 05:46:31 +0000 (0:00:00.785) 1:06:16.543 ***** 2026-02-05 05:47:05.143721 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:47:05.143730 | orchestrator | 2026-02-05 05:47:05.143737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:47:05.143743 | orchestrator | Thursday 05 February 2026 05:46:32 +0000 (0:00:00.785) 1:06:17.328 ***** 2026-02-05 05:47:05.143748 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:47:05.143754 | orchestrator | 2026-02-05 05:47:05.143760 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:47:05.143765 | orchestrator | Thursday 05 February 2026 05:46:33 +0000 (0:00:00.919) 1:06:18.248 ***** 2026-02-05 05:47:05.143771 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-05 05:47:05.143776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-05 05:47:05.143781 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-05 05:47:05.143787 | orchestrator | skipping: 
[testbed-node-4]
2026-02-05 05:47:05.143792 | orchestrator |
2026-02-05 05:47:05.143797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:47:05.143802 | orchestrator | Thursday 05 February 2026 05:46:34 +0000 (0:00:01.365) 1:06:19.613 *****
2026-02-05 05:47:05.143820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 05:47:05.143826 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:47:05.143831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 05:47:05.143836 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:47:05.143856 | orchestrator |
2026-02-05 05:47:05.143861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:47:05.143867 | orchestrator | Thursday 05 February 2026 05:46:35 +0000 (0:00:01.055) 1:06:20.669 *****
2026-02-05 05:47:05.143872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 05:47:05.143877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 05:47:05.143882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 05:47:05.143887 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:47:05.143893 | orchestrator |
2026-02-05 05:47:05.143898 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:47:05.143903 | orchestrator | Thursday 05 February 2026 05:46:36 +0000 (0:00:01.039) 1:06:21.709 *****
2026-02-05 05:47:05.143908 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:47:05.143913 | orchestrator |
2026-02-05 05:47:05.143919 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:47:05.143924 | orchestrator | Thursday 05 February 2026 05:46:37 +0000 (0:00:00.789) 1:06:22.498 *****
2026-02-05 05:47:05.143929 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-05 05:47:05.143934 | orchestrator |
2026-02-05 05:47:05.143939 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-05 05:47:05.143945 | orchestrator | Thursday 05 February 2026 05:46:38 +0000 (0:00:01.014) 1:06:23.512 *****
2026-02-05 05:47:05.143950 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:47:05.143955 | orchestrator |
2026-02-05 05:47:05.143960 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-05 05:47:05.143965 | orchestrator | Thursday 05 February 2026 05:46:40 +0000 (0:00:01.415) 1:06:24.927 *****
2026-02-05 05:47:05.143970 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-02-05 05:47:05.143975 | orchestrator |
2026-02-05 05:47:05.143992 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-05 05:47:05.143998 | orchestrator | Thursday 05 February 2026 05:46:41 +0000 (0:00:01.092) 1:06:26.020 *****
2026-02-05 05:47:05.144003 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:47:05.144008 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-05 05:47:05.144014 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-05 05:47:05.144019 | orchestrator |
2026-02-05 05:47:05.144025 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-05 05:47:05.144031 | orchestrator | Thursday 05 February 2026 05:46:44 +0000 (0:00:03.399) 1:06:29.420 *****
2026-02-05 05:47:05.144038 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-05 05:47:05.144044 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-05 05:47:05.144050 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:47:05.144056 | orchestrator |
2026-02-05 05:47:05.144063 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-05 05:47:05.144069 | orchestrator | Thursday 05 February 2026 05:46:46 +0000 (0:00:02.053) 1:06:31.474 *****
2026-02-05 05:47:05.144075 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:47:05.144081 | orchestrator |
2026-02-05 05:47:05.144088 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-05 05:47:05.144094 | orchestrator | Thursday 05 February 2026 05:46:47 +0000 (0:00:00.784) 1:06:32.259 *****
2026-02-05 05:47:05.144100 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-02-05 05:47:05.144110 | orchestrator |
2026-02-05 05:47:05.144121 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-05 05:47:05.144132 | orchestrator | Thursday 05 February 2026 05:46:48 +0000 (0:00:01.240) 1:06:33.500 *****
2026-02-05 05:47:05.144142 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 05:47:05.144152 | orchestrator |
2026-02-05 05:47:05.144165 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-05 05:47:05.144173 | orchestrator | Thursday 05 February 2026 05:46:50 +0000 (0:00:01.664) 1:06:35.165 *****
2026-02-05 05:47:05.144181 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:47:05.144189 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-05 05:47:05.144197 | orchestrator |
2026-02-05 05:47:05.144205 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-05 05:47:05.144213 | orchestrator | Thursday 05 February 2026 05:46:55 +0000 (0:00:05.571) 1:06:40.737 *****
2026-02-05 05:47:05.144221 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 05:47:05.144230 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-05 05:47:05.144239 | orchestrator |
2026-02-05 05:47:05.144246 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-05 05:47:05.144255 | orchestrator | Thursday 05 February 2026 05:46:59 +0000 (0:00:03.305) 1:06:44.042 *****
2026-02-05 05:47:05.144264 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-05 05:47:05.144269 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:47:05.144274 | orchestrator |
2026-02-05 05:47:05.144279 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-05 05:47:05.144284 | orchestrator | Thursday 05 February 2026 05:47:00 +0000 (0:00:01.633) 1:06:45.676 *****
2026-02-05 05:47:05.144289 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4
2026-02-05 05:47:05.144295 | orchestrator |
2026-02-05 05:47:05.144304 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-05 05:47:05.144310 | orchestrator | Thursday 05 February 2026 05:47:01 +0000 (0:00:01.112) 1:06:46.789 *****
2026-02-05 05:47:05.144315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144360 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:47:05.144368 | orchestrator |
2026-02-05 05:47:05.144376 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-05 05:47:05.144384 | orchestrator | Thursday 05 February 2026 05:47:03 +0000 (0:00:01.581) 1:06:48.371 *****
2026-02-05 05:47:05.144391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:47:05.144424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404103 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:48:15.404113 | orchestrator |
2026-02-05 05:48:15.404121 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-05 05:48:15.404146 | orchestrator | Thursday 05 February 2026 05:47:05 +0000 (0:00:01.580) 1:06:49.951 *****
2026-02-05 05:48:15.404153 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404160 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404166 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404172 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404180 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-05 05:48:15.404185 | orchestrator |
2026-02-05 05:48:15.404191 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-05 05:48:15.404197 | orchestrator | Thursday 05 February 2026 05:47:41 +0000 (0:00:35.917) 1:07:25.869 *****
2026-02-05 05:48:15.404202 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:48:15.404208 | orchestrator |
2026-02-05 05:48:15.404213 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-05 05:48:15.404219 | orchestrator | Thursday 05 February 2026 05:47:41 +0000 (0:00:00.746) 1:07:26.615 *****
2026-02-05 05:48:15.404224 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:48:15.404249 | orchestrator |
2026-02-05 05:48:15.404255 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-05 05:48:15.404260 | orchestrator | Thursday 05 February 2026 05:47:42 +0000 (0:00:00.764) 1:07:27.379 *****
2026-02-05 05:48:15.404266 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4
2026-02-05 05:48:15.404272 | orchestrator |
2026-02-05 05:48:15.404278 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-05 05:48:15.404283 | orchestrator | Thursday 05 February 2026 05:47:43 +0000 (0:00:01.189) 1:07:28.569 *****
2026-02-05 05:48:15.404288 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4
2026-02-05 05:48:15.404294 | orchestrator |
2026-02-05 05:48:15.404299 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-05 05:48:15.404305 | orchestrator | Thursday 05 February 2026 05:47:44 +0000 (0:00:01.098) 1:07:29.668 *****
2026-02-05 05:48:15.404310 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:48:15.404317 | orchestrator |
2026-02-05 05:48:15.404322 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-05 05:48:15.404328 | orchestrator | Thursday 05 February 2026 05:47:46 +0000 (0:00:02.061) 1:07:31.729 *****
2026-02-05 05:48:15.404333 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:48:15.404339 | orchestrator |
2026-02-05 05:48:15.404355 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-05 05:48:15.404361 | orchestrator | Thursday 05 February 2026 05:47:48 +0000 (0:00:01.969) 1:07:33.698 *****
2026-02-05 05:48:15.404367 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:48:15.404372 | orchestrator |
2026-02-05 05:48:15.404377 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-05 05:48:15.404383 | orchestrator | Thursday 05 February 2026 05:47:51 +0000 (0:00:02.330) 1:07:36.029 *****
2026-02-05 05:48:15.404389 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-05 05:48:15.404394 | orchestrator |
2026-02-05 05:48:15.404400 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-05 05:48:15.404405 | orchestrator |
2026-02-05 05:48:15.404411 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:48:15.404422 | orchestrator | Thursday 05 February 2026 05:47:54 +0000 (0:00:02.876) 1:07:38.906 *****
2026-02-05 05:48:15.404427 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-05 05:48:15.404433 | orchestrator |
2026-02-05 05:48:15.404438 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 05:48:15.404444 | orchestrator | Thursday 05 February 2026 05:47:55 +0000 (0:00:01.088) 1:07:39.994 *****
2026-02-05 05:48:15.404449 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404455 | orchestrator |
2026-02-05 05:48:15.404460 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 05:48:15.404465 | orchestrator | Thursday 05 February 2026 05:47:56 +0000 (0:00:01.463) 1:07:41.457 *****
2026-02-05 05:48:15.404471 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404476 | orchestrator |
2026-02-05 05:48:15.404482 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:48:15.404487 | orchestrator | Thursday 05 February 2026 05:47:57 +0000 (0:00:01.101) 1:07:42.559 *****
2026-02-05 05:48:15.404493 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404498 | orchestrator |
2026-02-05 05:48:15.404504 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:48:15.404510 | orchestrator | Thursday 05 February 2026 05:47:59 +0000 (0:00:01.437) 1:07:43.997 *****
2026-02-05 05:48:15.404515 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404520 | orchestrator |
2026-02-05 05:48:15.404538 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 05:48:15.404544 | orchestrator | Thursday 05 February 2026 05:48:00 +0000 (0:00:01.117) 1:07:45.114 *****
2026-02-05 05:48:15.404551 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404557 | orchestrator |
2026-02-05 05:48:15.404564 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 05:48:15.404570 | orchestrator | Thursday 05 February 2026 05:48:01 +0000 (0:00:01.108) 1:07:46.222 *****
2026-02-05 05:48:15.404577 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404583 | orchestrator |
2026-02-05 05:48:15.404589 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 05:48:15.404595 | orchestrator | Thursday 05 February 2026 05:48:02 +0000 (0:00:01.111) 1:07:47.334 *****
2026-02-05 05:48:15.404602 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:15.404608 | orchestrator |
2026-02-05 05:48:15.404615 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 05:48:15.404656 | orchestrator | Thursday 05 February 2026 05:48:03 +0000 (0:00:01.110) 1:07:48.445 *****
2026-02-05 05:48:15.404662 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404668 | orchestrator |
2026-02-05 05:48:15.404675 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 05:48:15.404681 | orchestrator | Thursday 05 February 2026 05:48:04 +0000 (0:00:01.094) 1:07:49.539 *****
2026-02-05 05:48:15.404687 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:48:15.404694 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:48:15.404700 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:48:15.404707 | orchestrator |
2026-02-05 05:48:15.404713 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 05:48:15.404719 | orchestrator | Thursday 05 February 2026 05:48:06 +0000 (0:00:01.678) 1:07:51.218 *****
2026-02-05 05:48:15.404726 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:15.404732 | orchestrator |
2026-02-05 05:48:15.404738 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 05:48:15.404744 | orchestrator | Thursday 05 February 2026 05:48:07 +0000 (0:00:01.229) 1:07:52.448 *****
2026-02-05 05:48:15.404751 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:48:15.404757 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:48:15.404768 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:48:15.404775 | orchestrator |
2026-02-05 05:48:15.404781 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-05 05:48:15.404787 | orchestrator | Thursday 05 February 2026 05:48:10 +0000 (0:00:03.269) 1:07:55.718 *****
2026-02-05 05:48:15.404793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 05:48:15.404800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 05:48:15.404807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 05:48:15.404814 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:15.404820 | orchestrator |
2026-02-05 05:48:15.404827 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-05 05:48:15.404833 | orchestrator | Thursday 05 February 2026 05:48:12 +0000 (0:00:01.388) 1:07:57.107 *****
2026-02-05 05:48:15.404844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:48:15.404853 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:48:15.404860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:48:15.404867 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:15.404873 | orchestrator |
2026-02-05 05:48:15.404880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-05 05:48:15.404889 | orchestrator | Thursday 05 February 2026 05:48:14 +0000 (0:00:01.965) 1:07:59.072 *****
2026-02-05 05:48:15.404902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:15.404919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:34.364566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:34.364733 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.364746 | orchestrator |
2026-02-05 05:48:34.364755 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-05 05:48:34.364763 | orchestrator | Thursday 05 February 2026 05:48:15 +0000 (0:00:01.143) 1:08:00.216 *****
2026-02-05 05:48:34.364772 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd1923db1c6ca', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 05:48:08.196973', 'end': '2026-02-05 05:48:08.254350', 'delta': '0:00:00.057377', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1923db1c6ca'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-05 05:48:34.364808 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a31ed792a8ac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 05:48:08.744249', 'end': '2026-02-05 05:48:08.788116', 'delta': '0:00:00.043867', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a31ed792a8ac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-05 05:48:34.364829 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9163e99c5c4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 05:48:09.616655', 'end': '2026-02-05 05:48:09.663549', 'delta': '0:00:00.046894', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9163e99c5c4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 05:48:34.364836 | orchestrator |
2026-02-05 05:48:34.364843 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 05:48:34.364849 | orchestrator | Thursday 05 February 2026 05:48:16 +0000 (0:00:01.171) 1:08:01.387 *****
2026-02-05 05:48:34.364855 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:34.364862 | orchestrator |
2026-02-05 05:48:34.364868 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 05:48:34.364875 | orchestrator | Thursday 05 February 2026 05:48:18 +0000 (0:00:01.268) 1:08:02.996 *****
2026-02-05 05:48:34.364881 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.364887 | orchestrator |
2026-02-05 05:48:34.364893 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 05:48:34.364899 | orchestrator | Thursday 05 February 2026 05:48:19 +0000 (0:00:01.138) 1:08:04.265 *****
2026-02-05 05:48:34.364905 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:34.364911 | orchestrator |
2026-02-05 05:48:34.364918 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 05:48:34.364924 | orchestrator | Thursday 05 February 2026 05:48:20 +0000 (0:00:01.138) 1:08:05.403 *****
2026-02-05 05:48:34.364931 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-05 05:48:34.364938 | orchestrator |
2026-02-05 05:48:34.364943 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:48:34.364949 | orchestrator | Thursday 05 February 2026 05:48:22 +0000 (0:00:02.025) 1:08:07.429 *****
2026-02-05 05:48:34.364955 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:34.364961 | orchestrator |
2026-02-05 05:48:34.364967 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 05:48:34.364974 | orchestrator | Thursday 05 February 2026 05:48:23 +0000 (0:00:01.172) 1:08:08.602 *****
2026-02-05 05:48:34.364997 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.365004 | orchestrator |
2026-02-05 05:48:34.365011 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 05:48:34.365024 | orchestrator | Thursday 05 February 2026 05:48:24 +0000 (0:00:01.133) 1:08:09.736 *****
2026-02-05 05:48:34.365030 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.365036 | orchestrator |
2026-02-05 05:48:34.365042 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 05:48:34.365048 | orchestrator | Thursday 05 February 2026 05:48:26 +0000 (0:00:01.226) 1:08:10.962 *****
2026-02-05 05:48:34.365054 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.365059 | orchestrator |
2026-02-05 05:48:34.365065 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 05:48:34.365071 | orchestrator | Thursday 05 February 2026 05:48:27 +0000 (0:00:01.129) 1:08:12.092 *****
2026-02-05 05:48:34.365076 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.365082 | orchestrator |
2026-02-05 05:48:34.365088 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 05:48:34.365093 | orchestrator | Thursday 05 February 2026 05:48:28 +0000 (0:00:01.121) 1:08:13.214 *****
2026-02-05 05:48:34.365099 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:34.365105 | orchestrator |
2026-02-05 05:48:34.365111 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 05:48:34.365117 | orchestrator | Thursday 05 February 2026 05:48:29 +0000 (0:00:01.199) 1:08:14.414 *****
2026-02-05 05:48:34.365124 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.365129 | orchestrator |
2026-02-05 05:48:34.365135 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 05:48:34.365141 | orchestrator | Thursday 05 February 2026 05:48:30 +0000 (0:00:01.098) 1:08:15.513 *****
2026-02-05 05:48:34.365147 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:34.365153 | orchestrator |
2026-02-05 05:48:34.365159 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 05:48:34.365166 | orchestrator | Thursday 05 February 2026 05:48:31 +0000 (0:00:01.157) 1:08:16.671 *****
2026-02-05 05:48:34.365173 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:34.365179 | orchestrator |
2026-02-05 05:48:34.365185 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-05 05:48:34.365194 | orchestrator | Thursday 05 February 2026 05:48:32 +0000 (0:00:01.094) 1:08:17.766 *****
2026-02-05 05:48:34.365199 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:34.365205 | orchestrator |
2026-02-05 05:48:34.365211 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-05 05:48:34.365217 | orchestrator | Thursday 05 February 2026 05:48:34 +0000 (0:00:01.203) 1:08:18.970 *****
2026-02-05 05:48:34.365224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:48:34.365239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}})
2026-02-05 05:48:34.365247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-05 05:48:34.365269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}})
2026-02-05 05:48:35.458973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:48:35.459065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:48:35.459075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-05 05:48:35.459083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:48:35.459106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-05 05:48:35.459112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:48:35.459139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}})
2026-02-05 05:48:35.459159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}})
2026-02-05 05:48:35.459166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-05 05:48:35.459179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-02-05 05:48:35.459191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:48:35.459197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-05 05:48:35.459207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-05 05:48:35.673378 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:48:35.673467 | orchestrator | 2026-02-05 05:48:35.673480 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 05:48:35.673491 | orchestrator | Thursday 05 February 2026 05:48:35 +0000 (0:00:01.306) 1:08:20.276 ***** 2026-02-05 05:48:35.673503 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673515 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565', 'dm-uuid-LVM-vN6SqmnZs4OEgki7muUGb3CX2rpgO9JjiNwKDjdU3U6P9o8RLpsOeeot25aaAr4C'], 'uuids': ['85a8f83c-eeb5-49b7-8fd6-02ada4ea1f5a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc', 'scsi-SQEMU_QEMU_HARDDISK_1b9ba281-c2e6-4817-9dab-91e9708a21dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b9ba281', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673575 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-s8rEz7-ppR5-3mX9-9SVK-AT2X-wlWd-qt0ARf', 'scsi-0QEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9', 'scsi-SQEMU_QEMU_HARDDISK_93de9619-194c-45d0-9020-848f0c7631a9'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673604 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-05-01-22-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673702 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS', 'dm-uuid-CRYPT-LUKS2-39f72013c68f483e935747f3038f3162-jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:35.673763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--27670a2c--7838--5627--a951--e8a6d97fe4be-osd--block--27670a2c--7838--5627--a951--e8a6d97fe4be', 'dm-uuid-LVM-2cW2aDbCF7Qvd1HDyT5MPDeJBzJFIyWajOrxUSy4sPZH0JqYli0dE22RqjUl99AS'], 'uuids': ['39f72013-c68f-483e-9357-47f3038f3162'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '93de9619', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jOrxUS-y4sP-ZH0J-qYli-0dE2-2Rqj-Ul99AS']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652589 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-j8R0nG-W0YC-WK20-RGGA-JPgY-3scR-ZQIgrc', 'scsi-0QEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49', 'scsi-SQEMU_QEMU_HARDDISK_e3013df6-5c5e-4503-84f9-a700edabdb49'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3013df6', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--51c61bf5--abad--542f--be8e--c69d5e860565-osd--block--51c61bf5--abad--542f--be8e--c69d5e860565']}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652787 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62c048b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_62c048b1-5f64-433a-b6c9-e2210ab077fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652810 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652817 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C', 'dm-uuid-CRYPT-LUKS2-85a8f83ceeb549b78fd602ada4ea1f5a-iNwKDj-dU3U-6P9o-8RLp-sOee-ot25-aaAr4C'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-05 05:48:48.652839 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:48.652847 | orchestrator |
2026-02-05 05:48:48.652854 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-05 05:48:48.652861 | orchestrator | Thursday 05 February 2026 05:48:36 +0000 (0:00:01.389) 1:08:21.665 *****
2026-02-05 05:48:48.652866 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:48.652873 | orchestrator |
2026-02-05 05:48:48.652878 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-05 05:48:48.652884 | orchestrator | Thursday 05 February 2026 05:48:38 +0000 (0:00:01.487) 1:08:23.153 *****
2026-02-05 05:48:48.652889 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:48.652895 | orchestrator |
2026-02-05 05:48:48.652901 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:48:48.652907 | orchestrator | Thursday 05 February 2026 05:48:39 +0000 (0:00:01.111) 1:08:24.264 *****
2026-02-05 05:48:48.652912 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:48:48.652918 | orchestrator |
2026-02-05 05:48:48.652923 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:48:48.652929 | orchestrator | Thursday 05 February 2026 05:48:40 +0000 (0:00:01.499) 1:08:25.764 *****
2026-02-05 05:48:48.652934 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:48.652940 | orchestrator |
2026-02-05 05:48:48.652945 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-05 05:48:48.652954 | orchestrator | Thursday 05 February 2026 05:48:42 +0000 (0:00:01.115) 1:08:26.879 *****
2026-02-05 05:48:48.652963 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:48.652972 | orchestrator |
2026-02-05 05:48:48.652981 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-05 05:48:48.652989 | orchestrator | Thursday 05 February 2026 05:48:43 +0000 (0:00:01.212) 1:08:28.092 *****
2026-02-05 05:48:48.652997 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:48.653005 | orchestrator |
2026-02-05 05:48:48.653014 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-05 05:48:48.653023 | orchestrator | Thursday 05 February 2026 05:48:44 +0000 (0:00:01.140) 1:08:29.232 *****
2026-02-05 05:48:48.653032 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 05:48:48.653042 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 05:48:48.653051 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 05:48:48.653058 | orchestrator |
2026-02-05 05:48:48.653064 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-05 05:48:48.653069 | orchestrator | Thursday 05 February 2026 05:48:46 +0000 (0:00:01.937) 1:08:31.170 *****
2026-02-05 05:48:48.653074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 05:48:48.653080 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 05:48:48.653085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 05:48:48.653091 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:48:48.653096 | orchestrator |
2026-02-05 05:48:48.653101 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-05 05:48:48.653107 | orchestrator | Thursday 05 February 2026 05:48:47 +0000 (0:00:01.179) 1:08:32.350 *****
2026-02-05 05:48:48.653112 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-05 05:48:48.653118 | orchestrator |
2026-02-05 05:48:48.653129 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-05 05:49:29.950971 | orchestrator | Thursday 05 February 2026 05:48:48 +0000 (0:00:01.112) 1:08:33.463 *****
2026-02-05 05:49:29.951069 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951077 | orchestrator |
2026-02-05 05:49:29.951082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-05 05:49:29.951086 | orchestrator | Thursday 05 February 2026 05:48:49 +0000 (0:00:01.125) 1:08:34.588 *****
2026-02-05 05:49:29.951090 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951094 | orchestrator |
2026-02-05 05:49:29.951099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-05 05:49:29.951103 | orchestrator | Thursday 05 February 2026 05:48:50 +0000 (0:00:01.150) 1:08:35.739 *****
2026-02-05 05:49:29.951106 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951110 | orchestrator |
2026-02-05 05:49:29.951114 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-05 05:49:29.951118 | orchestrator | Thursday 05 February 2026 05:48:52 +0000 (0:00:01.108) 1:08:36.847 *****
2026-02-05 05:49:29.951121 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951126 | orchestrator |
2026-02-05 05:49:29.951130 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-05 05:49:29.951134 | orchestrator | Thursday 05 February 2026 05:48:53 +0000 (0:00:01.217) 1:08:38.064 *****
2026-02-05 05:49:29.951138 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 05:49:29.951142 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 05:49:29.951146 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:49:29.951149 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951153 | orchestrator |
2026-02-05 05:49:29.951157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-05 05:49:29.951161 | orchestrator | Thursday 05 February 2026 05:48:54 +0000 (0:00:01.364) 1:08:39.429 *****
2026-02-05 05:49:29.951164 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 05:49:29.951168 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 05:49:29.951172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:49:29.951175 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951179 | orchestrator |
2026-02-05 05:49:29.951183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-05 05:49:29.951186 | orchestrator | Thursday 05 February 2026 05:48:56 +0000 (0:00:01.405) 1:08:40.835 *****
2026-02-05 05:49:29.951190 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 05:49:29.951204 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 05:49:29.951208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:49:29.951212 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951216 | orchestrator |
2026-02-05 05:49:29.951219 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-05 05:49:29.951223 | orchestrator | Thursday 05 February 2026 05:48:57 +0000 (0:00:01.398) 1:08:42.234 *****
2026-02-05 05:49:29.951227 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951231 | orchestrator |
2026-02-05 05:49:29.951235 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-05 05:49:29.951238 | orchestrator | Thursday 05 February 2026 05:48:58 +0000 (0:00:01.129) 1:08:43.363 *****
2026-02-05 05:49:29.951252 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-05 05:49:29.951257 | orchestrator |
2026-02-05 05:49:29.951260 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-05 05:49:29.951269 | orchestrator | Thursday 05 February 2026 05:48:59 +0000 (0:00:01.441) 1:08:44.804 *****
2026-02-05 05:49:29.951273 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:49:29.951278 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:49:29.951281 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:49:29.951286 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:49:29.951294 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:49:29.951298 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:49:29.951302 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:49:29.951306 | orchestrator |
2026-02-05 05:49:29.951309 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-05 05:49:29.951313 | orchestrator | Thursday 05 February 2026 05:49:02 +0000 (0:00:02.165) 1:08:46.969 *****
2026-02-05 05:49:29.951317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 05:49:29.951320 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 05:49:29.951324 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 05:49:29.951328 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-05 05:49:29.951331 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-05 05:49:29.951335 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 05:49:29.951339 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-05 05:49:29.951343 | orchestrator |
2026-02-05 05:49:29.951346 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-05 05:49:29.951350 | orchestrator | Thursday 05 February 2026 05:49:04 +0000 (0:00:02.211) 1:08:49.180 *****
2026-02-05 05:49:29.951354 | orchestrator | changed: [testbed-node-5]
2026-02-05 05:49:29.951358 | orchestrator |
2026-02-05 05:49:29.951372 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-05 05:49:29.951376 | orchestrator | Thursday 05 February 2026 05:49:06 +0000 (0:00:01.987) 1:08:51.168 *****
2026-02-05 05:49:29.951380 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 05:49:29.951384 | orchestrator |
2026-02-05 05:49:29.951388 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-05 05:49:29.951392 | orchestrator | Thursday 05 February 2026 05:49:08 +0000 (0:00:02.552) 1:08:53.721 *****
2026-02-05 05:49:29.951396 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-05 05:49:29.951400 | orchestrator |
2026-02-05 05:49:29.951403 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 05:49:29.951407 | orchestrator | Thursday 05 February 2026 05:49:10 +0000 (0:00:01.946) 1:08:55.668 *****
2026-02-05 05:49:29.951411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-05 05:49:29.951415 | orchestrator |
2026-02-05 05:49:29.951419 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 05:49:29.951423 | orchestrator | Thursday 05 February 2026 05:49:11 +0000 (0:00:01.131) 1:08:56.799 *****
2026-02-05 05:49:29.951427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-05 05:49:29.951430 | orchestrator |
2026-02-05 05:49:29.951434 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 05:49:29.951438 | orchestrator | Thursday 05 February 2026 05:49:13 +0000 (0:00:01.108) 1:08:57.908 *****
2026-02-05 05:49:29.951442 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951446 | orchestrator |
2026-02-05 05:49:29.951449 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 05:49:29.951453 | orchestrator | Thursday 05 February 2026 05:49:14 +0000 (0:00:01.093) 1:08:59.002 *****
2026-02-05 05:49:29.951457 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951461 | orchestrator |
2026-02-05 05:49:29.951464 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 05:49:29.951471 | orchestrator | Thursday 05 February 2026 05:49:15 +0000 (0:00:01.527) 1:09:00.529 *****
2026-02-05 05:49:29.951475 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951479 | orchestrator |
2026-02-05 05:49:29.951483 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 05:49:29.951489 | orchestrator | Thursday 05 February 2026 05:49:17 +0000 (0:00:01.496) 1:09:02.025 *****
2026-02-05 05:49:29.951493 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951497 | orchestrator |
2026-02-05 05:49:29.951501 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 05:49:29.951505 | orchestrator | Thursday 05 February 2026 05:49:18 +0000 (0:00:01.524) 1:09:03.550 *****
2026-02-05 05:49:29.951508 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951512 | orchestrator |
2026-02-05 05:49:29.951516 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 05:49:29.951520 | orchestrator | Thursday 05 February 2026 05:49:19 +0000 (0:00:01.153) 1:09:04.704 *****
2026-02-05 05:49:29.951524 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951527 | orchestrator |
2026-02-05 05:49:29.951531 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 05:49:29.951535 | orchestrator | Thursday 05 February 2026 05:49:21 +0000 (0:00:01.125) 1:09:05.829 *****
2026-02-05 05:49:29.951539 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951542 | orchestrator |
2026-02-05 05:49:29.951546 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 05:49:29.951551 | orchestrator | Thursday 05 February 2026 05:49:22 +0000 (0:00:01.086) 1:09:06.916 *****
2026-02-05 05:49:29.951555 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951559 | orchestrator |
2026-02-05 05:49:29.951564 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 05:49:29.951568 | orchestrator | Thursday 05 February 2026 05:49:23 +0000 (0:00:01.558) 1:09:08.475 *****
2026-02-05 05:49:29.951572 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951577 | orchestrator |
2026-02-05 05:49:29.951581 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 05:49:29.951585 | orchestrator | Thursday 05 February 2026 05:49:25 +0000 (0:00:01.565) 1:09:10.041 *****
2026-02-05 05:49:29.951627 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951631 | orchestrator |
2026-02-05 05:49:29.951636 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 05:49:29.951640 | orchestrator | Thursday 05 February 2026 05:49:26 +0000 (0:00:00.785) 1:09:10.826 *****
2026-02-05 05:49:29.951645 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:49:29.951649 | orchestrator |
2026-02-05 05:49:29.951653 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 05:49:29.951658 | orchestrator | Thursday 05 February 2026 05:49:26 +0000 (0:00:00.758) 1:09:11.585 *****
2026-02-05 05:49:29.951662 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951666 | orchestrator |
2026-02-05 05:49:29.951671 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 05:49:29.951675 | orchestrator | Thursday 05 February 2026 05:49:27 +0000 (0:00:00.802) 1:09:12.388 *****
2026-02-05 05:49:29.951680 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:49:29.951684 | orchestrator |
2026-02-05 05:49:29.951688 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 05:49:29.951693
| orchestrator | Thursday 05 February 2026 05:49:28 +0000 (0:00:00.814) 1:09:13.202 ***** 2026-02-05 05:49:29.951697 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:49:29.951702 | orchestrator | 2026-02-05 05:49:29.951706 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:49:29.951710 | orchestrator | Thursday 05 February 2026 05:49:29 +0000 (0:00:00.786) 1:09:13.989 ***** 2026-02-05 05:49:29.951715 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:49:29.951719 | orchestrator | 2026-02-05 05:49:29.951726 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:50:09.582341 | orchestrator | Thursday 05 February 2026 05:49:29 +0000 (0:00:00.769) 1:09:14.759 ***** 2026-02-05 05:50:09.582457 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582475 | orchestrator | 2026-02-05 05:50:09.582489 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 05:50:09.582501 | orchestrator | Thursday 05 February 2026 05:49:30 +0000 (0:00:00.765) 1:09:15.524 ***** 2026-02-05 05:50:09.582513 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582524 | orchestrator | 2026-02-05 05:50:09.582535 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:50:09.582547 | orchestrator | Thursday 05 February 2026 05:49:31 +0000 (0:00:00.786) 1:09:16.310 ***** 2026-02-05 05:50:09.582558 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.582570 | orchestrator | 2026-02-05 05:50:09.582659 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 05:50:09.582672 | orchestrator | Thursday 05 February 2026 05:49:32 +0000 (0:00:00.771) 1:09:17.082 ***** 2026-02-05 05:50:09.582683 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.582694 | orchestrator | 2026-02-05 05:50:09.582705 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-05 05:50:09.582716 | orchestrator | Thursday 05 February 2026 05:49:33 +0000 (0:00:00.775) 1:09:17.858 ***** 2026-02-05 05:50:09.582727 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582738 | orchestrator | 2026-02-05 05:50:09.582749 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-05 05:50:09.582762 | orchestrator | Thursday 05 February 2026 05:49:33 +0000 (0:00:00.771) 1:09:18.629 ***** 2026-02-05 05:50:09.582773 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582784 | orchestrator | 2026-02-05 05:50:09.582795 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-05 05:50:09.582806 | orchestrator | Thursday 05 February 2026 05:49:34 +0000 (0:00:00.758) 1:09:19.387 ***** 2026-02-05 05:50:09.582817 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582828 | orchestrator | 2026-02-05 05:50:09.582839 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-05 05:50:09.582850 | orchestrator | Thursday 05 February 2026 05:49:35 +0000 (0:00:00.777) 1:09:20.165 ***** 2026-02-05 05:50:09.582861 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582874 | orchestrator | 2026-02-05 05:50:09.582888 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-05 05:50:09.582900 | orchestrator | Thursday 05 February 2026 05:49:36 +0000 (0:00:00.749) 1:09:20.915 ***** 2026-02-05 05:50:09.582913 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582925 | orchestrator | 2026-02-05 05:50:09.582955 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-05 05:50:09.582968 | orchestrator | Thursday 05 February 2026 05:49:36 +0000 (0:00:00.756) 1:09:21.672 ***** 
2026-02-05 05:50:09.582981 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.582994 | orchestrator | 2026-02-05 05:50:09.583007 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-05 05:50:09.583020 | orchestrator | Thursday 05 February 2026 05:49:37 +0000 (0:00:00.764) 1:09:22.436 ***** 2026-02-05 05:50:09.583032 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583045 | orchestrator | 2026-02-05 05:50:09.583058 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-05 05:50:09.583070 | orchestrator | Thursday 05 February 2026 05:49:38 +0000 (0:00:00.769) 1:09:23.206 ***** 2026-02-05 05:50:09.583083 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583096 | orchestrator | 2026-02-05 05:50:09.583109 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-05 05:50:09.583122 | orchestrator | Thursday 05 February 2026 05:49:39 +0000 (0:00:00.764) 1:09:23.971 ***** 2026-02-05 05:50:09.583134 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583147 | orchestrator | 2026-02-05 05:50:09.583160 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-05 05:50:09.583194 | orchestrator | Thursday 05 February 2026 05:49:39 +0000 (0:00:00.778) 1:09:24.749 ***** 2026-02-05 05:50:09.583208 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583220 | orchestrator | 2026-02-05 05:50:09.583233 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-05 05:50:09.583245 | orchestrator | Thursday 05 February 2026 05:49:40 +0000 (0:00:00.801) 1:09:25.551 ***** 2026-02-05 05:50:09.583256 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583267 | orchestrator | 2026-02-05 05:50:09.583277 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-05 05:50:09.583288 | orchestrator | Thursday 05 February 2026 05:49:41 +0000 (0:00:00.745) 1:09:26.297 ***** 2026-02-05 05:50:09.583299 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583310 | orchestrator | 2026-02-05 05:50:09.583320 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-05 05:50:09.583331 | orchestrator | Thursday 05 February 2026 05:49:42 +0000 (0:00:00.827) 1:09:27.124 ***** 2026-02-05 05:50:09.583342 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.583353 | orchestrator | 2026-02-05 05:50:09.583363 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-05 05:50:09.583374 | orchestrator | Thursday 05 February 2026 05:49:43 +0000 (0:00:01.518) 1:09:28.643 ***** 2026-02-05 05:50:09.583385 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.583396 | orchestrator | 2026-02-05 05:50:09.583406 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-05 05:50:09.583417 | orchestrator | Thursday 05 February 2026 05:49:45 +0000 (0:00:01.909) 1:09:30.552 ***** 2026-02-05 05:50:09.583428 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-05 05:50:09.583440 | orchestrator | 2026-02-05 05:50:09.583451 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-05 05:50:09.583462 | orchestrator | Thursday 05 February 2026 05:49:46 +0000 (0:00:01.091) 1:09:31.644 ***** 2026-02-05 05:50:09.583473 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583483 | orchestrator | 2026-02-05 05:50:09.583495 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-05 05:50:09.583525 | orchestrator | Thursday 05 February 2026 05:49:47 +0000 (0:00:01.129) 1:09:32.774 ***** 
2026-02-05 05:50:09.583536 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583547 | orchestrator | 2026-02-05 05:50:09.583558 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-05 05:50:09.583569 | orchestrator | Thursday 05 February 2026 05:49:49 +0000 (0:00:01.170) 1:09:33.944 ***** 2026-02-05 05:50:09.583612 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-05 05:50:09.583631 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-05 05:50:09.583651 | orchestrator | 2026-02-05 05:50:09.583670 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-05 05:50:09.583688 | orchestrator | Thursday 05 February 2026 05:49:51 +0000 (0:00:01.889) 1:09:35.834 ***** 2026-02-05 05:50:09.583701 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.583712 | orchestrator | 2026-02-05 05:50:09.583723 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-05 05:50:09.583734 | orchestrator | Thursday 05 February 2026 05:49:52 +0000 (0:00:01.468) 1:09:37.302 ***** 2026-02-05 05:50:09.583745 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583756 | orchestrator | 2026-02-05 05:50:09.583766 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-05 05:50:09.583777 | orchestrator | Thursday 05 February 2026 05:49:53 +0000 (0:00:01.119) 1:09:38.422 ***** 2026-02-05 05:50:09.583788 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.583799 | orchestrator | 2026-02-05 05:50:09.583810 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-05 05:50:09.583820 | orchestrator | Thursday 05 February 2026 05:49:54 +0000 (0:00:00.769) 1:09:39.191 ***** 2026-02-05 05:50:09.583841 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 05:50:09.583852 | orchestrator | 2026-02-05 05:50:09.583863 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-05 05:50:09.583874 | orchestrator | Thursday 05 February 2026 05:49:55 +0000 (0:00:00.764) 1:09:39.956 ***** 2026-02-05 05:50:09.583885 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-02-05 05:50:09.583896 | orchestrator | 2026-02-05 05:50:09.583907 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-05 05:50:09.583918 | orchestrator | Thursday 05 February 2026 05:49:56 +0000 (0:00:01.124) 1:09:41.080 ***** 2026-02-05 05:50:09.583929 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.583940 | orchestrator | 2026-02-05 05:50:09.583957 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-05 05:50:09.583969 | orchestrator | Thursday 05 February 2026 05:49:57 +0000 (0:00:01.666) 1:09:42.747 ***** 2026-02-05 05:50:09.583980 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-05 05:50:09.583991 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-05 05:50:09.584002 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-05 05:50:09.584012 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584023 | orchestrator | 2026-02-05 05:50:09.584034 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-05 05:50:09.584045 | orchestrator | Thursday 05 February 2026 05:49:59 +0000 (0:00:01.131) 1:09:43.879 ***** 2026-02-05 05:50:09.584056 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584067 | orchestrator | 2026-02-05 05:50:09.584085 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-05 05:50:09.584104 | orchestrator | Thursday 05 February 2026 05:50:00 +0000 (0:00:01.126) 1:09:45.006 ***** 2026-02-05 05:50:09.584123 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584142 | orchestrator | 2026-02-05 05:50:09.584154 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-05 05:50:09.584165 | orchestrator | Thursday 05 February 2026 05:50:01 +0000 (0:00:01.176) 1:09:46.182 ***** 2026-02-05 05:50:09.584176 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584187 | orchestrator | 2026-02-05 05:50:09.584197 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-05 05:50:09.584208 | orchestrator | Thursday 05 February 2026 05:50:02 +0000 (0:00:01.129) 1:09:47.312 ***** 2026-02-05 05:50:09.584219 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584230 | orchestrator | 2026-02-05 05:50:09.584241 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-05 05:50:09.584252 | orchestrator | Thursday 05 February 2026 05:50:03 +0000 (0:00:01.120) 1:09:48.432 ***** 2026-02-05 05:50:09.584262 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584273 | orchestrator | 2026-02-05 05:50:09.584284 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-05 05:50:09.584295 | orchestrator | Thursday 05 February 2026 05:50:04 +0000 (0:00:00.783) 1:09:49.216 ***** 2026-02-05 05:50:09.584306 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:09.584317 | orchestrator | 2026-02-05 05:50:09.584328 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-05 05:50:09.584339 | orchestrator | Thursday 05 February 2026 05:50:06 +0000 (0:00:02.127) 1:09:51.343 ***** 2026-02-05 05:50:09.584350 | orchestrator | ok: 
[testbed-node-5] 2026-02-05 05:50:09.584361 | orchestrator | 2026-02-05 05:50:09.584371 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-05 05:50:09.584382 | orchestrator | Thursday 05 February 2026 05:50:07 +0000 (0:00:00.780) 1:09:52.124 ***** 2026-02-05 05:50:09.584393 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-02-05 05:50:09.584412 | orchestrator | 2026-02-05 05:50:09.584423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-05 05:50:09.584433 | orchestrator | Thursday 05 February 2026 05:50:08 +0000 (0:00:01.141) 1:09:53.266 ***** 2026-02-05 05:50:09.584444 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:09.584455 | orchestrator | 2026-02-05 05:50:09.584466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-05 05:50:09.584485 | orchestrator | Thursday 05 February 2026 05:50:09 +0000 (0:00:01.123) 1:09:54.390 ***** 2026-02-05 05:50:51.113075 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113178 | orchestrator | 2026-02-05 05:50:51.113192 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-05 05:50:51.113202 | orchestrator | Thursday 05 February 2026 05:50:10 +0000 (0:00:01.196) 1:09:55.586 ***** 2026-02-05 05:50:51.113210 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113218 | orchestrator | 2026-02-05 05:50:51.113225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-05 05:50:51.113233 | orchestrator | Thursday 05 February 2026 05:50:11 +0000 (0:00:01.120) 1:09:56.707 ***** 2026-02-05 05:50:51.113242 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113250 | orchestrator | 2026-02-05 05:50:51.113258 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-05 05:50:51.113267 | orchestrator | Thursday 05 February 2026 05:50:13 +0000 (0:00:01.136) 1:09:57.843 ***** 2026-02-05 05:50:51.113275 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113283 | orchestrator | 2026-02-05 05:50:51.113292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-05 05:50:51.113300 | orchestrator | Thursday 05 February 2026 05:50:14 +0000 (0:00:01.130) 1:09:58.974 ***** 2026-02-05 05:50:51.113308 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113316 | orchestrator | 2026-02-05 05:50:51.113324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-05 05:50:51.113332 | orchestrator | Thursday 05 February 2026 05:50:15 +0000 (0:00:01.132) 1:10:00.106 ***** 2026-02-05 05:50:51.113340 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113348 | orchestrator | 2026-02-05 05:50:51.113356 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-05 05:50:51.113364 | orchestrator | Thursday 05 February 2026 05:50:16 +0000 (0:00:01.139) 1:10:01.246 ***** 2026-02-05 05:50:51.113372 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113380 | orchestrator | 2026-02-05 05:50:51.113397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-05 05:50:51.113404 | orchestrator | Thursday 05 February 2026 05:50:17 +0000 (0:00:01.110) 1:10:02.356 ***** 2026-02-05 05:50:51.113412 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:50:51.113420 | orchestrator | 2026-02-05 05:50:51.113427 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-05 05:50:51.113435 | orchestrator | Thursday 05 February 2026 05:50:18 +0000 (0:00:00.779) 1:10:03.135 ***** 2026-02-05 05:50:51.113459 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-05 05:50:51.113469 | orchestrator | 2026-02-05 05:50:51.113476 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-05 05:50:51.113483 | orchestrator | Thursday 05 February 2026 05:50:19 +0000 (0:00:01.087) 1:10:04.223 ***** 2026-02-05 05:50:51.113491 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-02-05 05:50:51.113499 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-05 05:50:51.113507 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-05 05:50:51.113514 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-05 05:50:51.113522 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-05 05:50:51.113528 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-05 05:50:51.113535 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-05 05:50:51.113586 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-05 05:50:51.113595 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-05 05:50:51.113602 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-05 05:50:51.113610 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 05:50:51.113616 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 05:50:51.113623 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 05:50:51.113631 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 05:50:51.113638 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-05 05:50:51.113646 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-05 05:50:51.113652 | orchestrator | 2026-02-05 05:50:51.113659 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 05:50:51.113667 | orchestrator | Thursday 05 February 2026 05:50:25 +0000 (0:00:06.344) 1:10:10.567 ***** 2026-02-05 05:50:51.113674 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-05 05:50:51.113682 | orchestrator | 2026-02-05 05:50:51.113689 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 05:50:51.113697 | orchestrator | Thursday 05 February 2026 05:50:26 +0000 (0:00:01.103) 1:10:11.670 ***** 2026-02-05 05:50:51.113705 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:50:51.113714 | orchestrator | 2026-02-05 05:50:51.113723 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 05:50:51.113732 | orchestrator | Thursday 05 February 2026 05:50:28 +0000 (0:00:01.603) 1:10:13.274 ***** 2026-02-05 05:50:51.113740 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:50:51.113749 | orchestrator | 2026-02-05 05:50:51.113758 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 05:50:51.113766 | orchestrator | Thursday 05 February 2026 05:50:30 +0000 (0:00:01.694) 1:10:14.969 ***** 2026-02-05 05:50:51.113775 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113783 | orchestrator | 2026-02-05 05:50:51.113791 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 05:50:51.113883 | orchestrator | Thursday 05 February 2026 05:50:30 +0000 (0:00:00.775) 1:10:15.744 ***** 2026-02-05 05:50:51.113893 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113901 | 
orchestrator | 2026-02-05 05:50:51.113909 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 05:50:51.113917 | orchestrator | Thursday 05 February 2026 05:50:31 +0000 (0:00:00.828) 1:10:16.572 ***** 2026-02-05 05:50:51.113924 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113931 | orchestrator | 2026-02-05 05:50:51.113939 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 05:50:51.113945 | orchestrator | Thursday 05 February 2026 05:50:32 +0000 (0:00:00.801) 1:10:17.374 ***** 2026-02-05 05:50:51.113952 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113960 | orchestrator | 2026-02-05 05:50:51.113968 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 05:50:51.113976 | orchestrator | Thursday 05 February 2026 05:50:33 +0000 (0:00:00.788) 1:10:18.163 ***** 2026-02-05 05:50:51.113984 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.113992 | orchestrator | 2026-02-05 05:50:51.114000 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 05:50:51.114008 | orchestrator | Thursday 05 February 2026 05:50:34 +0000 (0:00:00.791) 1:10:18.954 ***** 2026-02-05 05:50:51.114064 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114072 | orchestrator | 2026-02-05 05:50:51.114081 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 05:50:51.114099 | orchestrator | Thursday 05 February 2026 05:50:34 +0000 (0:00:00.762) 1:10:19.717 ***** 2026-02-05 05:50:51.114107 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114115 | orchestrator | 2026-02-05 05:50:51.114123 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-05 05:50:51.114131 | orchestrator | Thursday 05 February 2026 05:50:35 +0000 (0:00:00.801) 1:10:20.519 ***** 2026-02-05 05:50:51.114138 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114146 | orchestrator | 2026-02-05 05:50:51.114154 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 05:50:51.114162 | orchestrator | Thursday 05 February 2026 05:50:36 +0000 (0:00:00.765) 1:10:21.284 ***** 2026-02-05 05:50:51.114170 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114178 | orchestrator | 2026-02-05 05:50:51.114192 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 05:50:51.114207 | orchestrator | Thursday 05 February 2026 05:50:37 +0000 (0:00:00.760) 1:10:22.045 ***** 2026-02-05 05:50:51.114215 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114223 | orchestrator | 2026-02-05 05:50:51.114231 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 05:50:51.114238 | orchestrator | Thursday 05 February 2026 05:50:38 +0000 (0:00:00.782) 1:10:22.828 ***** 2026-02-05 05:50:51.114246 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114254 | orchestrator | 2026-02-05 05:50:51.114262 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 05:50:51.114270 | orchestrator | Thursday 05 February 2026 05:50:38 +0000 (0:00:00.774) 1:10:23.603 ***** 2026-02-05 05:50:51.114278 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-05 05:50:51.114285 | orchestrator | 2026-02-05 05:50:51.114293 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 05:50:51.114301 | orchestrator | Thursday 05 February 2026 05:50:42 +0000 (0:00:04.162) 1:10:27.765 ***** 2026-02-05 05:50:51.114309 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:50:51.114317 | orchestrator | 2026-02-05 05:50:51.114325 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 05:50:51.114333 | orchestrator | Thursday 05 February 2026 05:50:43 +0000 (0:00:00.841) 1:10:28.607 ***** 2026-02-05 05:50:51.114343 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-05 05:50:51.114354 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-05 05:50:51.114363 | orchestrator | 2026-02-05 05:50:51.114371 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 05:50:51.114379 | orchestrator | Thursday 05 February 2026 05:50:48 +0000 (0:00:04.988) 1:10:33.595 ***** 2026-02-05 05:50:51.114387 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114395 | orchestrator | 2026-02-05 05:50:51.114403 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 05:50:51.114411 | orchestrator | Thursday 05 February 2026 05:50:49 +0000 (0:00:00.781) 1:10:34.377 ***** 2026-02-05 05:50:51.114418 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114426 | orchestrator | 2026-02-05 05:50:51.114434 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 05:50:51.114448 | orchestrator | Thursday 05 February 2026 05:50:50 +0000 (0:00:00.752) 1:10:35.129 ***** 2026-02-05 05:50:51.114456 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:50:51.114464 | orchestrator | 2026-02-05 05:50:51.114472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 05:50:51.114486 | orchestrator | Thursday 05 February 2026 05:50:51 +0000 (0:00:00.793) 1:10:35.923 ***** 2026-02-05 05:52:00.367462 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.367650 | orchestrator | 2026-02-05 05:52:00.367673 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 05:52:00.367688 | orchestrator | Thursday 05 February 2026 05:50:51 +0000 (0:00:00.780) 1:10:36.703 ***** 2026-02-05 05:52:00.367700 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.367711 | orchestrator | 2026-02-05 05:52:00.367723 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 05:52:00.367734 | orchestrator | Thursday 05 February 2026 05:50:52 +0000 (0:00:00.787) 1:10:37.491 ***** 2026-02-05 05:52:00.367745 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:00.367757 | orchestrator | 2026-02-05 05:52:00.367769 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 05:52:00.367780 | orchestrator | Thursday 05 February 2026 05:50:53 +0000 (0:00:00.887) 1:10:38.378 ***** 2026-02-05 05:52:00.367791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:52:00.367802 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:52:00.367813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:52:00.367824 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 05:52:00.367835 | orchestrator | 2026-02-05 05:52:00.367846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 05:52:00.367857 | orchestrator | Thursday 05 February 2026 05:50:54 +0000 (0:00:01.049) 1:10:39.428 ***** 2026-02-05 05:52:00.367868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:52:00.367879 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:52:00.367890 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:52:00.367901 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.367912 | orchestrator | 2026-02-05 05:52:00.367923 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 05:52:00.367934 | orchestrator | Thursday 05 February 2026 05:50:55 +0000 (0:00:01.053) 1:10:40.481 ***** 2026-02-05 05:52:00.367945 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-05 05:52:00.367956 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-05 05:52:00.367967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-05 05:52:00.367978 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.367989 | orchestrator | 2026-02-05 05:52:00.368018 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 05:52:00.368030 | orchestrator | Thursday 05 February 2026 05:50:56 +0000 (0:00:01.062) 1:10:41.544 ***** 2026-02-05 05:52:00.368041 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:00.368052 | orchestrator | 2026-02-05 05:52:00.368063 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 05:52:00.368074 | orchestrator | Thursday 05 February 2026 05:50:57 +0000 (0:00:00.794) 1:10:42.339 ***** 2026-02-05 05:52:00.368085 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-02-05 05:52:00.368096 | orchestrator | 2026-02-05 05:52:00.368107 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 05:52:00.368118 | orchestrator | Thursday 05 February 2026 05:50:58 +0000 (0:00:00.976) 1:10:43.315 ***** 2026-02-05 05:52:00.368129 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:00.368140 | orchestrator | 2026-02-05 05:52:00.368151 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-05 05:52:00.368162 | orchestrator | Thursday 05 February 2026 05:51:00 +0000 (0:00:01.885) 1:10:45.200 ***** 2026-02-05 05:52:00.368208 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-02-05 05:52:00.368220 | orchestrator | 2026-02-05 05:52:00.368231 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 05:52:00.368242 | orchestrator | Thursday 05 February 2026 05:51:01 +0000 (0:00:01.110) 1:10:46.311 ***** 2026-02-05 05:52:00.368253 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:52:00.368264 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 05:52:00.368276 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:52:00.368287 | orchestrator | 2026-02-05 05:52:00.368298 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:52:00.368309 | orchestrator | Thursday 05 February 2026 05:51:04 +0000 (0:00:03.335) 1:10:49.646 ***** 2026-02-05 05:52:00.368319 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-05 05:52:00.368331 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 05:52:00.368342 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:00.368353 | orchestrator | 2026-02-05 05:52:00.368364 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-05 05:52:00.368374 | orchestrator | Thursday 05 February 2026 05:51:06 +0000 (0:00:01.942) 1:10:51.589 ***** 2026-02-05 05:52:00.368385 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.368396 | orchestrator | 2026-02-05 05:52:00.368408 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-05 05:52:00.368419 | orchestrator | Thursday 05 February 2026 05:51:07 +0000 (0:00:00.763) 1:10:52.352 ***** 2026-02-05 05:52:00.368430 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-05 05:52:00.368442 | orchestrator | 2026-02-05 05:52:00.368453 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-05 05:52:00.368464 | orchestrator | Thursday 05 February 2026 05:51:08 +0000 (0:00:01.113) 1:10:53.466 ***** 2026-02-05 05:52:00.368476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:52:00.368488 | orchestrator | 2026-02-05 05:52:00.368499 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-05 05:52:00.368510 | orchestrator | Thursday 05 February 2026 05:51:10 +0000 (0:00:01.597) 1:10:55.064 ***** 2026-02-05 05:52:00.368566 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:52:00.368581 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 05:52:00.368592 | orchestrator | 2026-02-05 05:52:00.368604 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 05:52:00.368615 | orchestrator | Thursday 05 February 2026 05:51:15 +0000 (0:00:05.536) 1:11:00.601 ***** 
2026-02-05 05:52:00.368626 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 05:52:00.368637 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 05:52:00.368648 | orchestrator | 2026-02-05 05:52:00.368659 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 05:52:00.368670 | orchestrator | Thursday 05 February 2026 05:51:19 +0000 (0:00:03.330) 1:11:03.931 ***** 2026-02-05 05:52:00.368681 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-05 05:52:00.368692 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:00.368703 | orchestrator | 2026-02-05 05:52:00.368714 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-05 05:52:00.368725 | orchestrator | Thursday 05 February 2026 05:51:20 +0000 (0:00:01.600) 1:11:05.532 ***** 2026-02-05 05:52:00.368736 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-05 05:52:00.368748 | orchestrator | 2026-02-05 05:52:00.368759 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-05 05:52:00.368778 | orchestrator | Thursday 05 February 2026 05:51:21 +0000 (0:00:01.249) 1:11:06.781 ***** 2026-02-05 05:52:00.368790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368852 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.368863 | orchestrator | 2026-02-05 05:52:00.368874 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-05 05:52:00.368885 | orchestrator | Thursday 05 February 2026 05:51:23 +0000 (0:00:01.602) 1:11:08.384 ***** 2026-02-05 05:52:00.368897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 05:52:00.368952 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.368963 | orchestrator | 2026-02-05 05:52:00.368974 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-05 05:52:00.368985 | orchestrator | Thursday 05 February 2026 05:51:25 +0000 (0:00:01.566) 1:11:09.950 ***** 2026-02-05 05:52:00.368997 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:52:00.369008 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:52:00.369019 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:52:00.369031 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:52:00.369043 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 05:52:00.369054 | orchestrator | 2026-02-05 05:52:00.369065 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-05 05:52:00.369077 | orchestrator | Thursday 05 February 2026 05:51:59 +0000 (0:00:34.457) 1:11:44.408 ***** 2026-02-05 05:52:00.369088 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:00.369099 | orchestrator | 2026-02-05 05:52:00.369110 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-05 05:52:00.369128 | orchestrator | Thursday 05 February 2026 05:52:00 +0000 (0:00:00.763) 1:11:45.172 ***** 2026-02-05 05:52:52.134271 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.134440 | orchestrator | 2026-02-05 05:52:52.134469 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-05 05:52:52.134488 | orchestrator | Thursday 05 February 2026 05:52:01 +0000 (0:00:00.776) 1:11:45.948 ***** 2026-02-05 05:52:52.134504 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-05 05:52:52.134577 | orchestrator | 2026-02-05 05:52:52.134593 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-05 05:52:52.134607 | orchestrator | Thursday 05 February 2026 05:52:02 +0000 (0:00:01.101) 1:11:47.049 ***** 2026-02-05 05:52:52.134619 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-05 05:52:52.134636 | orchestrator | 2026-02-05 05:52:52.134651 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-05 05:52:52.134668 | orchestrator | Thursday 05 February 2026 05:52:03 +0000 (0:00:01.142) 1:11:48.193 ***** 2026-02-05 05:52:52.134684 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.134700 | orchestrator | 2026-02-05 05:52:52.134715 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-05 05:52:52.134732 | orchestrator | Thursday 05 February 2026 05:52:05 +0000 (0:00:02.092) 1:11:50.286 ***** 2026-02-05 05:52:52.134748 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.134767 | orchestrator | 2026-02-05 05:52:52.134785 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-05 05:52:52.134803 | orchestrator | Thursday 05 February 2026 05:52:07 +0000 (0:00:01.904) 1:11:52.190 ***** 2026-02-05 05:52:52.134818 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.134829 | orchestrator | 2026-02-05 05:52:52.134841 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-05 05:52:52.134853 | orchestrator | Thursday 05 February 2026 05:52:09 +0000 (0:00:02.184) 1:11:54.374 ***** 2026-02-05 05:52:52.134865 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 05:52:52.134878 | orchestrator | 2026-02-05 05:52:52.134890 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-05 05:52:52.134903 | 
orchestrator | skipping: no hosts matched 2026-02-05 05:52:52.134915 | orchestrator | 2026-02-05 05:52:52.134942 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-05 05:52:52.134954 | orchestrator | skipping: no hosts matched 2026-02-05 05:52:52.134966 | orchestrator | 2026-02-05 05:52:52.134977 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-05 05:52:52.134988 | orchestrator | skipping: no hosts matched 2026-02-05 05:52:52.134997 | orchestrator | 2026-02-05 05:52:52.135007 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-05 05:52:52.135017 | orchestrator | 2026-02-05 05:52:52.135025 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-05 05:52:52.135035 | orchestrator | Thursday 05 February 2026 05:52:13 +0000 (0:00:04.174) 1:11:58.549 ***** 2026-02-05 05:52:52.135044 | orchestrator | changed: [testbed-node-0] 2026-02-05 05:52:52.135053 | orchestrator | changed: [testbed-node-1] 2026-02-05 05:52:52.135062 | orchestrator | changed: [testbed-node-2] 2026-02-05 05:52:52.135071 | orchestrator | changed: [testbed-node-3] 2026-02-05 05:52:52.135080 | orchestrator | changed: [testbed-node-4] 2026-02-05 05:52:52.135090 | orchestrator | changed: [testbed-node-5] 2026-02-05 05:52:52.135099 | orchestrator | 2026-02-05 05:52:52.135108 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-05 05:52:52.135118 | orchestrator | Thursday 05 February 2026 05:52:16 +0000 (0:00:02.621) 1:12:01.171 ***** 2026-02-05 05:52:52.135128 | orchestrator | changed: [testbed-node-4] 2026-02-05 05:52:52.135137 | orchestrator | changed: [testbed-node-3] 2026-02-05 05:52:52.135146 | orchestrator | changed: [testbed-node-1] 2026-02-05 05:52:52.135155 | orchestrator | changed: [testbed-node-0] 2026-02-05 05:52:52.135163 | 
orchestrator | changed: [testbed-node-2] 2026-02-05 05:52:52.135171 | orchestrator | changed: [testbed-node-5] 2026-02-05 05:52:52.135188 | orchestrator | 2026-02-05 05:52:52.135196 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 05:52:52.135204 | orchestrator | Thursday 05 February 2026 05:52:19 +0000 (0:00:03.436) 1:12:04.607 ***** 2026-02-05 05:52:52.135212 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.135220 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.135228 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.135236 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.135244 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:52:52.135252 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.135260 | orchestrator | 2026-02-05 05:52:52.135268 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 05:52:52.135276 | orchestrator | Thursday 05 February 2026 05:52:22 +0000 (0:00:02.325) 1:12:06.933 ***** 2026-02-05 05:52:52.135284 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.135292 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.135300 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.135307 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.135315 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:52:52.135323 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.135331 | orchestrator | 2026-02-05 05:52:52.135339 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 05:52:52.135347 | orchestrator | Thursday 05 February 2026 05:52:24 +0000 (0:00:01.915) 1:12:08.849 ***** 2026-02-05 05:52:52.135357 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 05:52:52.135370 | 
orchestrator | 2026-02-05 05:52:52.135378 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 05:52:52.135386 | orchestrator | Thursday 05 February 2026 05:52:26 +0000 (0:00:02.201) 1:12:11.050 ***** 2026-02-05 05:52:52.135395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 05:52:52.135403 | orchestrator | 2026-02-05 05:52:52.135428 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 05:52:52.135437 | orchestrator | Thursday 05 February 2026 05:52:28 +0000 (0:00:02.065) 1:12:13.115 ***** 2026-02-05 05:52:52.135445 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:52:52.135453 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.135461 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.135469 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.135477 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:52:52.135485 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.135493 | orchestrator | 2026-02-05 05:52:52.135501 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 05:52:52.135509 | orchestrator | Thursday 05 February 2026 05:52:30 +0000 (0:00:02.593) 1:12:15.709 ***** 2026-02-05 05:52:52.135539 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:52:52.135548 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:52:52.135556 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:52:52.135564 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.135572 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:52:52.135580 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.135588 | orchestrator | 2026-02-05 05:52:52.135596 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-05 05:52:52.135604 | orchestrator | Thursday 05 February 2026 05:52:33 +0000 (0:00:02.134) 1:12:17.844 ***** 2026-02-05 05:52:52.135612 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:52:52.135620 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:52:52.135628 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:52:52.135636 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.135644 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:52:52.135652 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.135669 | orchestrator | 2026-02-05 05:52:52.135682 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 05:52:52.135695 | orchestrator | Thursday 05 February 2026 05:52:35 +0000 (0:00:02.472) 1:12:20.317 ***** 2026-02-05 05:52:52.135709 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:52:52.135722 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:52:52.135735 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:52:52.135747 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.135759 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:52:52.135772 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.135785 | orchestrator | 2026-02-05 05:52:52.135799 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 05:52:52.135812 | orchestrator | Thursday 05 February 2026 05:52:37 +0000 (0:00:02.250) 1:12:22.568 ***** 2026-02-05 05:52:52.135831 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:52:52.135844 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:52:52.135856 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.135868 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.135880 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.135891 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.135904 | orchestrator | 
2026-02-05 05:52:52.135917 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 05:52:52.135931 | orchestrator | Thursday 05 February 2026 05:52:39 +0000 (0:00:02.242) 1:12:24.810 ***** 2026-02-05 05:52:52.135944 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:52:52.135959 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:52:52.135972 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:52:52.135987 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:52:52.135995 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:52:52.136003 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.136011 | orchestrator | 2026-02-05 05:52:52.136019 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 05:52:52.136027 | orchestrator | Thursday 05 February 2026 05:52:41 +0000 (0:00:01.968) 1:12:26.779 ***** 2026-02-05 05:52:52.136035 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:52:52.136043 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:52:52.136050 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:52:52.136058 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:52:52.136066 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:52:52.136074 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.136082 | orchestrator | 2026-02-05 05:52:52.136090 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 05:52:52.136097 | orchestrator | Thursday 05 February 2026 05:52:43 +0000 (0:00:01.785) 1:12:28.564 ***** 2026-02-05 05:52:52.136105 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.136113 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.136121 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.136129 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.136137 | orchestrator | ok: [testbed-node-4] 
2026-02-05 05:52:52.136145 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.136152 | orchestrator | 2026-02-05 05:52:52.136160 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 05:52:52.136168 | orchestrator | Thursday 05 February 2026 05:52:46 +0000 (0:00:02.445) 1:12:31.010 ***** 2026-02-05 05:52:52.136176 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.136184 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.136192 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.136199 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:52:52.136207 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:52:52.136215 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:52:52.136223 | orchestrator | 2026-02-05 05:52:52.136231 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 05:52:52.136239 | orchestrator | Thursday 05 February 2026 05:52:48 +0000 (0:00:02.078) 1:12:33.089 ***** 2026-02-05 05:52:52.136247 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:52:52.136262 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:52:52.136270 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:52:52.136278 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:52:52.136286 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:52:52.136294 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.136302 | orchestrator | 2026-02-05 05:52:52.136310 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 05:52:52.136318 | orchestrator | Thursday 05 February 2026 05:52:50 +0000 (0:00:02.045) 1:12:35.135 ***** 2026-02-05 05:52:52.136326 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:52:52.136334 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:52:52.136342 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:52:52.136350 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 05:52:52.136358 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:52:52.136366 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:52:52.136374 | orchestrator | 2026-02-05 05:52:52.136391 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 05:53:48.718606 | orchestrator | Thursday 05 February 2026 05:52:52 +0000 (0:00:01.790) 1:12:36.926 ***** 2026-02-05 05:53:48.718717 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:53:48.718730 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:53:48.718737 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:53:48.718745 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:53:48.718754 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:53:48.718762 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:53:48.718769 | orchestrator | 2026-02-05 05:53:48.718777 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 05:53:48.718785 | orchestrator | Thursday 05 February 2026 05:52:54 +0000 (0:00:02.144) 1:12:39.071 ***** 2026-02-05 05:53:48.718792 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:53:48.718799 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:53:48.718808 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:53:48.718815 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:53:48.718823 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:53:48.718831 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:53:48.718839 | orchestrator | 2026-02-05 05:53:48.718847 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 05:53:48.718855 | orchestrator | Thursday 05 February 2026 05:52:56 +0000 (0:00:01.795) 1:12:40.866 ***** 2026-02-05 05:53:48.718863 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:53:48.718870 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
05:53:48.718877 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:53:48.718884 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:53:48.718892 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:53:48.718899 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:53:48.718906 | orchestrator | 2026-02-05 05:53:48.718912 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 05:53:48.718920 | orchestrator | Thursday 05 February 2026 05:52:58 +0000 (0:00:01.989) 1:12:42.856 ***** 2026-02-05 05:53:48.718926 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:53:48.718933 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:53:48.718941 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:53:48.718949 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:53:48.718955 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:53:48.718962 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:53:48.718969 | orchestrator | 2026-02-05 05:53:48.718976 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 05:53:48.718999 | orchestrator | Thursday 05 February 2026 05:52:59 +0000 (0:00:01.740) 1:12:44.597 ***** 2026-02-05 05:53:48.719007 | orchestrator | skipping: [testbed-node-0] 2026-02-05 05:53:48.719015 | orchestrator | skipping: [testbed-node-1] 2026-02-05 05:53:48.719022 | orchestrator | skipping: [testbed-node-2] 2026-02-05 05:53:48.719029 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:53:48.719036 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:53:48.719064 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:53:48.719071 | orchestrator | 2026-02-05 05:53:48.719079 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 05:53:48.719086 | orchestrator | Thursday 05 February 2026 05:53:01 +0000 (0:00:01.759) 1:12:46.356 ***** 2026-02-05 05:53:48.719093 | 
orchestrator | ok: [testbed-node-0] 2026-02-05 05:53:48.719099 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:53:48.719106 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:53:48.719114 | orchestrator | skipping: [testbed-node-3] 2026-02-05 05:53:48.719122 | orchestrator | skipping: [testbed-node-4] 2026-02-05 05:53:48.719130 | orchestrator | skipping: [testbed-node-5] 2026-02-05 05:53:48.719137 | orchestrator | 2026-02-05 05:53:48.719145 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 05:53:48.719152 | orchestrator | Thursday 05 February 2026 05:53:03 +0000 (0:00:02.089) 1:12:48.446 ***** 2026-02-05 05:53:48.719160 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:53:48.719168 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:53:48.719176 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:53:48.719183 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:53:48.719191 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:53:48.719199 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:53:48.719207 | orchestrator | 2026-02-05 05:53:48.719214 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 05:53:48.719222 | orchestrator | Thursday 05 February 2026 05:53:05 +0000 (0:00:02.187) 1:12:50.633 ***** 2026-02-05 05:53:48.719230 | orchestrator | ok: [testbed-node-0] 2026-02-05 05:53:48.719238 | orchestrator | ok: [testbed-node-1] 2026-02-05 05:53:48.719245 | orchestrator | ok: [testbed-node-2] 2026-02-05 05:53:48.719253 | orchestrator | ok: [testbed-node-3] 2026-02-05 05:53:48.719261 | orchestrator | ok: [testbed-node-4] 2026-02-05 05:53:48.719269 | orchestrator | ok: [testbed-node-5] 2026-02-05 05:53:48.719276 | orchestrator | 2026-02-05 05:53:48.719284 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-05 05:53:48.719292 | orchestrator | Thursday 05 February 2026 05:53:07 +0000 (0:00:02.157) 
1:12:52.791 *****
2026-02-05 05:53:48.719300 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719307 | orchestrator |
2026-02-05 05:53:48.719315 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-05 05:53:48.719323 | orchestrator | Thursday 05 February 2026 05:53:11 +0000 (0:00:03.448) 1:12:56.240 *****
2026-02-05 05:53:48.719330 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719338 | orchestrator |
2026-02-05 05:53:48.719346 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-05 05:53:48.719353 | orchestrator | Thursday 05 February 2026 05:53:14 +0000 (0:00:03.191) 1:12:59.431 *****
2026-02-05 05:53:48.719361 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719368 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:53:48.719376 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:53:48.719383 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:53:48.719391 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:53:48.719399 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:53:48.719406 | orchestrator |
2026-02-05 05:53:48.719414 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-05 05:53:48.719422 | orchestrator | Thursday 05 February 2026 05:53:17 +0000 (0:00:02.970) 1:13:02.402 *****
2026-02-05 05:53:48.719430 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719437 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:53:48.719445 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:53:48.719452 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:53:48.719460 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:53:48.719467 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:53:48.719474 | orchestrator |
2026-02-05 05:53:48.719482 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-05 05:53:48.719531 | orchestrator | Thursday 05 February 2026 05:53:19 +0000 (0:00:02.080) 1:13:04.483 *****
2026-02-05 05:53:48.719542 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 05:53:48.719556 | orchestrator |
2026-02-05 05:53:48.719564 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-05 05:53:48.719571 | orchestrator | Thursday 05 February 2026 05:53:22 +0000 (0:00:02.536) 1:13:07.020 *****
2026-02-05 05:53:48.719578 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:53:48.719586 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719593 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:53:48.719599 | orchestrator | ok: [testbed-node-3]
2026-02-05 05:53:48.719606 | orchestrator | ok: [testbed-node-5]
2026-02-05 05:53:48.719614 | orchestrator | ok: [testbed-node-4]
2026-02-05 05:53:48.719621 | orchestrator |
2026-02-05 05:53:48.719628 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-05 05:53:48.719636 | orchestrator | Thursday 05 February 2026 05:53:26 +0000 (0:00:03.893) 1:13:10.913 *****
2026-02-05 05:53:48.719643 | orchestrator | changed: [testbed-node-3]
2026-02-05 05:53:48.719650 | orchestrator | changed: [testbed-node-4]
2026-02-05 05:53:48.719658 | orchestrator | changed: [testbed-node-5]
2026-02-05 05:53:48.719665 | orchestrator | changed: [testbed-node-2]
2026-02-05 05:53:48.719672 | orchestrator | changed: [testbed-node-1]
2026-02-05 05:53:48.719679 | orchestrator | changed: [testbed-node-0]
2026-02-05 05:53:48.719687 | orchestrator |
2026-02-05 05:53:48.719694 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-02-05 05:53:48.719701 | orchestrator |
2026-02-05 05:53:48.719709 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:53:48.719716 | orchestrator | Thursday 05 February 2026 05:53:30 +0000 (0:00:04.616) 1:13:15.530 *****
2026-02-05 05:53:48.719723 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719730 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:53:48.719737 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:53:48.719744 | orchestrator |
2026-02-05 05:53:48.719751 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:53:48.719759 | orchestrator | Thursday 05 February 2026 05:53:32 +0000 (0:00:01.692) 1:13:17.223 *****
2026-02-05 05:53:48.719765 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719773 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:53:48.719785 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:53:48.719793 | orchestrator |
2026-02-05 05:53:48.719800 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-02-05 05:53:48.719809 | orchestrator | Thursday 05 February 2026 05:53:34 +0000 (0:00:01.603) 1:13:18.826 *****
2026-02-05 05:53:48.719816 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:53:48.719823 | orchestrator |
2026-02-05 05:53:48.719831 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-02-05 05:53:48.719839 | orchestrator | Thursday 05 February 2026 05:53:36 +0000 (0:00:02.404) 1:13:21.231 *****
2026-02-05 05:53:48.719846 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:53:48.719853 | orchestrator |
2026-02-05 05:53:48.719860 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-02-05 05:53:48.719867 | orchestrator |
2026-02-05 05:53:48.719874 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-02-05 05:53:48.719881 | orchestrator | Thursday 05 February 2026 05:53:38 +0000 (0:00:01.862) 1:13:23.093 *****
2026-02-05 05:53:48.719888 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:53:48.719896 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:53:48.719903 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:53:48.719910 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:53:48.719917 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:53:48.719924 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:53:48.719932 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:53:48.719939 | orchestrator |
2026-02-05 05:53:48.719946 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:53:48.719958 | orchestrator | Thursday 05 February 2026 05:53:40 +0000 (0:00:02.260) 1:13:25.354 *****
2026-02-05 05:53:48.719966 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:53:48.719973 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:53:48.719980 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:53:48.719987 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:53:48.719994 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:53:48.720001 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:53:48.720009 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:53:48.720016 | orchestrator |
2026-02-05 05:53:48.720024 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-05 05:53:48.720031 | orchestrator | Thursday 05 February 2026 05:53:42 +0000 (0:00:02.465) 1:13:27.819 *****
2026-02-05 05:53:48.720038 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:53:48.720045 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:53:48.720052 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:53:48.720059 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:53:48.720067 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:53:48.720074 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:53:48.720081 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:53:48.720088 | orchestrator |
2026-02-05 05:53:48.720095 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-05 05:53:48.720103 | orchestrator | Thursday 05 February 2026 05:53:45 +0000 (0:00:02.419) 1:13:30.239 *****
2026-02-05 05:53:48.720110 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:53:48.720117 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:53:48.720124 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:53:48.720131 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:53:48.720139 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:53:48.720146 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:53:48.720153 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:53:48.720161 | orchestrator |
2026-02-05 05:53:48.720168 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-02-05 05:53:48.720175 | orchestrator | Thursday 05 February 2026 05:53:47 +0000 (0:00:02.450) 1:13:32.689 *****
2026-02-05 05:53:48.720183 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:53:48.720190 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:53:48.720197 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:53:48.720210 | orchestrator | skipping: [testbed-node-3]
2026-02-05 05:54:37.646659 | orchestrator | skipping: [testbed-node-4]
2026-02-05 05:54:37.646755 | orchestrator | skipping: [testbed-node-5]
2026-02-05 05:54:37.646763 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646768 | orchestrator |
2026-02-05 05:54:37.646774 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-02-05 05:54:37.646779 | orchestrator |
2026-02-05 05:54:37.646783 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-02-05 05:54:37.646788 | orchestrator | Thursday 05 February 2026 05:53:50 +0000 (0:00:02.885) 1:13:35.575 *****
2026-02-05 05:54:37.646793 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-02-05 05:54:37.646798 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-02-05 05:54:37.646802 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-02-05 05:54:37.646806 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646810 | orchestrator |
2026-02-05 05:54:37.646814 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-02-05 05:54:37.646818 | orchestrator | Thursday 05 February 2026 05:53:51 +0000 (0:00:01.128) 1:13:36.704 *****
2026-02-05 05:54:37.646822 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646825 | orchestrator |
2026-02-05 05:54:37.646829 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-02-05 05:54:37.646833 | orchestrator | Thursday 05 February 2026 05:53:53 +0000 (0:00:01.275) 1:13:37.980 *****
2026-02-05 05:54:37.646837 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646857 | orchestrator |
2026-02-05 05:54:37.646861 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-02-05 05:54:37.646865 | orchestrator | Thursday 05 February 2026 05:53:54 +0000 (0:00:01.115) 1:13:39.095 *****
2026-02-05 05:54:37.646869 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646872 | orchestrator |
2026-02-05 05:54:37.646876 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-02-05 05:54:37.646880 | orchestrator | Thursday 05 February 2026 05:53:55 +0000 (0:00:01.123) 1:13:40.218 *****
2026-02-05 05:54:37.646884 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646887 | orchestrator |
2026-02-05 05:54:37.646891 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-02-05 05:54:37.646905 | orchestrator | Thursday 05 February 2026 05:53:56 +0000 (0:00:01.118) 1:13:41.337 *****
2026-02-05 05:54:37.646909 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-02-05 05:54:37.646913 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-02-05 05:54:37.646917 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646920 | orchestrator |
2026-02-05 05:54:37.646924 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-02-05 05:54:37.646928 | orchestrator | Thursday 05 February 2026 05:53:57 +0000 (0:00:01.103) 1:13:42.440 *****
2026-02-05 05:54:37.646932 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646935 | orchestrator |
2026-02-05 05:54:37.646939 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-02-05 05:54:37.646943 | orchestrator | Thursday 05 February 2026 05:53:58 +0000 (0:00:01.115) 1:13:43.556 *****
2026-02-05 05:54:37.646947 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646951 | orchestrator |
2026-02-05 05:54:37.646955 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-02-05 05:54:37.646959 | orchestrator | Thursday 05 February 2026 05:53:59 +0000 (0:00:01.146) 1:13:44.703 *****
2026-02-05 05:54:37.646963 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.646966 | orchestrator |
2026-02-05 05:54:37.646970 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-02-05 05:54:37.647024 | orchestrator | Thursday 05 February 2026 05:54:01 +0000 (0:00:01.151) 1:13:45.854 *****
2026-02-05 05:54:37.647042 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-02-05 05:54:37.647046 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-02-05 05:54:37.647050 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.647054 | orchestrator |
2026-02-05 05:54:37.647084 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-02-05 05:54:37.647094 | orchestrator | Thursday 05 February 2026 05:54:02 +0000 (0:00:01.365) 1:13:47.220 *****
2026-02-05 05:54:37.647140 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.647147 | orchestrator |
2026-02-05 05:54:37.647153 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-02-05 05:54:37.647160 | orchestrator | Thursday 05 February 2026 05:54:03 +0000 (0:00:01.100) 1:13:48.320 *****
2026-02-05 05:54:37.647166 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.647171 | orchestrator |
2026-02-05 05:54:37.647178 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-02-05 05:54:37.647184 | orchestrator | Thursday 05 February 2026 05:54:04 +0000 (0:00:01.098) 1:13:49.419 *****
2026-02-05 05:54:37.647190 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.647196 | orchestrator |
2026-02-05 05:54:37.647202 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-02-05 05:54:37.647208 | orchestrator | Thursday 05 February 2026 05:54:05 +0000 (0:00:01.140) 1:13:50.559 *****
2026-02-05 05:54:37.647214 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:54:37.647220 | orchestrator |
2026-02-05 05:54:37.647226 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-02-05 05:54:37.647232 | orchestrator |
2026-02-05 05:54:37.647239 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 05:54:37.647253 | orchestrator | Thursday 05 February 2026 05:54:07 +0000 (0:00:01.655) 1:13:52.214 *****
2026-02-05 05:54:37.647260 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647265 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:54:37.647270 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:54:37.647274 | orchestrator |
2026-02-05 05:54:37.647278 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-02-05 05:54:37.647283 | orchestrator | Thursday 05 February 2026 05:54:09 +0000 (0:00:01.662) 1:13:53.877 *****
2026-02-05 05:54:37.647287 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647292 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:54:37.647309 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:54:37.647314 | orchestrator |
2026-02-05 05:54:37.647318 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-02-05 05:54:37.647323 | orchestrator | Thursday 05 February 2026 05:54:10 +0000 (0:00:01.393) 1:13:55.271 *****
2026-02-05 05:54:37.647327 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647332 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:54:37.647336 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:54:37.647358 | orchestrator |
2026-02-05 05:54:37.647363 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-02-05 05:54:37.647367 | orchestrator | Thursday 05 February 2026 05:54:11 +0000 (0:00:01.382) 1:13:56.653 *****
2026-02-05 05:54:37.647371 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647376 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:54:37.647380 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:54:37.647385 | orchestrator |
2026-02-05 05:54:37.647389 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-02-05 05:54:37.647393 | orchestrator | Thursday 05 February 2026 05:54:13 +0000 (0:00:01.393) 1:13:58.047 *****
2026-02-05 05:54:37.647398 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647402 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:54:37.647406 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:54:37.647411 | orchestrator |
2026-02-05 05:54:37.647415 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-02-05 05:54:37.647419 | orchestrator | Thursday 05 February 2026 05:54:14 +0000 (0:00:01.311) 1:13:59.358 *****
2026-02-05 05:54:37.647423 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647428 | orchestrator | skipping: [testbed-node-1]
2026-02-05 05:54:37.647432 | orchestrator | skipping: [testbed-node-2]
2026-02-05 05:54:37.647436 | orchestrator |
2026-02-05 05:54:37.647441 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-02-05 05:54:37.647445 | orchestrator | Thursday 05 February 2026 05:54:15 +0000 (0:00:01.334) 1:14:00.693 *****
2026-02-05 05:54:37.647449 | orchestrator | skipping: [testbed-node-0]
2026-02-05 05:54:37.647454 | orchestrator |
2026-02-05 05:54:37.647458 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-02-05 05:54:37.647463 | orchestrator |
2026-02-05 05:54:37.647470 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 05:54:37.647519 | orchestrator | Thursday 05 February 2026 05:54:17 +0000 (0:00:01.807) 1:14:02.501 *****
2026-02-05 05:54:37.647528 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647535 | orchestrator |
2026-02-05 05:54:37.647542 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 05:54:37.647548 | orchestrator | Thursday 05 February 2026 05:54:19 +0000 (0:00:01.511) 1:14:04.013 *****
2026-02-05 05:54:37.647554 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647560 | orchestrator |
2026-02-05 05:54:37.647567 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-02-05 05:54:37.647619 | orchestrator | Thursday 05 February 2026 05:54:20 +0000 (0:00:01.139) 1:14:05.152 *****
2026-02-05 05:54:37.647625 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647629 | orchestrator |
2026-02-05 05:54:37.647632 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-02-05 05:54:37.647642 | orchestrator | Thursday 05 February 2026 05:54:21 +0000 (0:00:01.158) 1:14:06.310 *****
2026-02-05 05:54:37.647646 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647650 | orchestrator |
2026-02-05 05:54:37.647654 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-02-05 05:54:37.647657 | orchestrator | Thursday 05 February 2026 05:54:24 +0000 (0:00:03.038) 1:14:09.349 *****
2026-02-05 05:54:37.647661 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647665 | orchestrator |
2026-02-05 05:54:37.647669 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-02-05 05:54:37.647673 | orchestrator | Thursday 05 February 2026 05:54:28 +0000 (0:00:03.656) 1:14:13.006 *****
2026-02-05 05:54:37.647676 | orchestrator | changed: [testbed-node-0]
2026-02-05 05:54:37.647680 | orchestrator |
2026-02-05 05:54:37.647684 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-02-05 05:54:37.647701 | orchestrator |
2026-02-05 05:54:37.647705 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-02-05 05:54:37.647709 | orchestrator | Thursday 05 February 2026 05:54:30 +0000 (0:00:01.830) 1:14:14.836 *****
2026-02-05 05:54:37.647712 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647743 | orchestrator | ok: [testbed-node-1]
2026-02-05 05:54:37.647747 | orchestrator | ok: [testbed-node-2]
2026-02-05 05:54:37.647751 | orchestrator |
2026-02-05 05:54:37.647755 | orchestrator | TASK [Show ceph status] ********************************************************
2026-02-05 05:54:37.647759 | orchestrator | Thursday 05 February 2026 05:54:31 +0000 (0:00:01.775) 1:14:16.612 *****
2026-02-05 05:54:37.647762 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647766 | orchestrator |
2026-02-05 05:54:37.647770 | orchestrator | TASK [Show all daemons version] ************************************************
2026-02-05 05:54:37.647774 | orchestrator | Thursday 05 February 2026 05:54:34 +0000 (0:00:02.457) 1:14:19.069 *****
2026-02-05 05:54:37.647777 | orchestrator | ok: [testbed-node-0]
2026-02-05 05:54:37.647781 | orchestrator |
2026-02-05 05:54:37.647785 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 05:54:37.647802 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 05:54:37.647808 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-02-05 05:54:37.647814 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0
2026-02-05 05:54:37.647837 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0
2026-02-05 05:54:37.647848 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-02-05 05:54:38.082058 | orchestrator | testbed-node-3 : ok=316  changed=21  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0
2026-02-05 05:54:38.082137 | orchestrator | testbed-node-4 : ok=302  changed=17  unreachable=0 failed=0 skipped=338  rescued=0 ignored=0
2026-02-05 05:54:38.082145 | orchestrator | testbed-node-5 : ok=309  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-02-05 05:54:38.082151 | orchestrator |
2026-02-05 05:54:38.082157 | orchestrator |
2026-02-05 05:54:38.082163 | orchestrator |
2026-02-05 05:54:38.082168 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 05:54:38.082175 | orchestrator | Thursday 05 February 2026 05:54:37 +0000 (0:00:03.371) 1:14:22.441 *****
2026-02-05 05:54:38.082181 | orchestrator | ===============================================================================
2026-02-05 05:54:38.082206 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 78.90s
2026-02-05 05:54:38.082212 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 76.80s
2026-02-05 05:54:38.082217 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 35.92s
2026-02-05 05:54:38.082222 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 35.73s
2026-02-05 05:54:38.082227 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 34.46s
2026-02-05 05:54:38.082232 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.95s
2026-02-05 05:54:38.082237 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 30.37s
2026-02-05 05:54:38.082242 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 28.62s
2026-02-05 05:54:38.082257 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 28.36s
2026-02-05 05:54:38.082263 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 23.21s
2026-02-05 05:54:38.082268 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.02s
2026-02-05 05:54:38.082273 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.89s
2026-02-05 05:54:38.082280 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.56s
2026-02-05 05:54:38.082289 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.20s
2026-02-05 05:54:38.082297 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.63s
2026-02-05 05:54:38.082305 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.07s
2026-02-05 05:54:38.082315 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.99s
2026-02-05 05:54:38.082323 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.00s
2026-02-05 05:54:38.082331 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.64s
2026-02-05 05:54:38.082339 | orchestrator | Set cluster configs ---------------------------------------------------- 11.20s
2026-02-05 05:54:38.270460 | orchestrator | + osism apply cephclient
2026-02-05 05:54:40.020514 | orchestrator | 2026-02-05 05:54:40 | INFO  | Task c8279b55-a5ca-42bb-a53f-2e5b182468ad (cephclient) was prepared for execution.
2026-02-05 05:54:40.020626 | orchestrator | 2026-02-05 05:54:40 | INFO  | It takes a moment until task c8279b55-a5ca-42bb-a53f-2e5b182468ad (cephclient) has been started and output is visible here.
2026-02-05 05:55:06.940815 | orchestrator |
2026-02-05 05:55:06.940951 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-05 05:55:06.940978 | orchestrator |
2026-02-05 05:55:06.940995 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-05 05:55:06.941013 | orchestrator | Thursday 05 February 2026 05:54:46 +0000 (0:00:02.587) 0:00:02.587 *****
2026-02-05 05:55:06.941034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-05 05:55:06.941059 | orchestrator |
2026-02-05 05:55:06.941076 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-05 05:55:06.941091 | orchestrator | Thursday 05 February 2026 05:54:48 +0000 (0:00:01.838) 0:00:04.426 *****
2026-02-05 05:55:06.941108 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-05 05:55:06.941123 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-05 05:55:06.941140 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-05 05:55:06.941157 | orchestrator |
2026-02-05 05:55:06.941173 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-05 05:55:06.941189 | orchestrator | Thursday 05 February 2026 05:54:51 +0000 (0:00:02.502) 0:00:06.929 *****
2026-02-05 05:55:06.941206 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-05 05:55:06.941256 | orchestrator |
2026-02-05 05:55:06.941273 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-05 05:55:06.941288 | orchestrator | Thursday 05 February 2026 05:54:53 +0000 (0:00:01.876) 0:00:08.805 *****
2026-02-05 05:55:06.941303 | orchestrator | ok: [testbed-manager]
2026-02-05 05:55:06.941319 | orchestrator |
2026-02-05 05:55:06.941335 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-05 05:55:06.941353 | orchestrator | Thursday 05 February 2026 05:54:54 +0000 (0:00:01.666) 0:00:10.472 *****
2026-02-05 05:55:06.941370 | orchestrator | ok: [testbed-manager]
2026-02-05 05:55:06.941388 | orchestrator |
2026-02-05 05:55:06.941401 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-05 05:55:06.941412 | orchestrator | Thursday 05 February 2026 05:54:56 +0000 (0:00:01.723) 0:00:12.195 *****
2026-02-05 05:55:06.941428 | orchestrator | ok: [testbed-manager]
2026-02-05 05:55:06.941443 | orchestrator |
2026-02-05 05:55:06.941453 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-05 05:55:06.941462 | orchestrator | Thursday 05 February 2026 05:54:58 +0000 (0:00:01.870) 0:00:14.066 *****
2026-02-05 05:55:06.941501 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-05 05:55:06.941513 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-02-05 05:55:06.941528 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-05 05:55:06.941541 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-05 05:55:06.941551 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-05 05:55:06.941560 | orchestrator |
2026-02-05 05:55:06.941571 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-05 05:55:06.941581 | orchestrator | Thursday 05 February 2026 05:55:02 +0000 (0:00:04.376) 0:00:18.442 *****
2026-02-05 05:55:06.941591 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-05 05:55:06.941600 | orchestrator |
2026-02-05 05:55:06.941610 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-05 05:55:06.941619 | orchestrator | Thursday 05 February 2026 05:55:04 +0000 (0:00:01.343) 0:00:19.786 *****
2026-02-05 05:55:06.941629 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:55:06.941639 | orchestrator |
2026-02-05 05:55:06.941649 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-05 05:55:06.941659 | orchestrator | Thursday 05 February 2026 05:55:05 +0000 (0:00:01.103) 0:00:20.890 *****
2026-02-05 05:55:06.941668 | orchestrator | skipping: [testbed-manager]
2026-02-05 05:55:06.941678 | orchestrator |
2026-02-05 05:55:06.941687 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 05:55:06.941712 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 05:55:06.941723 | orchestrator |
2026-02-05 05:55:06.941734 | orchestrator |
2026-02-05 05:55:06.941744 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 05:55:06.941755 | orchestrator | Thursday 05 February 2026 05:55:06 +0000 (0:00:01.462) 0:00:22.352 *****
2026-02-05 05:55:06.941771 | orchestrator | ===============================================================================
2026-02-05 05:55:06.941787 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.38s
2026-02-05 05:55:06.941803 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.50s
2026-02-05 05:55:06.941825 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.88s
2026-02-05 05:55:06.941845 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.87s
2026-02-05 05:55:06.941859 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.84s
2026-02-05 05:55:06.941874 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.72s
2026-02-05 05:55:06.941889 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.67s
2026-02-05 05:55:06.941917 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.46s
2026-02-05 05:55:06.941931 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.34s
2026-02-05 05:55:06.941946 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.10s
2026-02-05 05:55:07.256955 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-05 05:55:07.257073 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-02-05 05:55:07.263566 | orchestrator | + set -e
2026-02-05 05:55:07.263789 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-05 05:55:07.263809 | orchestrator | ++ export INTERACTIVE=false
2026-02-05 05:55:07.263819 | orchestrator | ++ INTERACTIVE=false
2026-02-05 05:55:07.263827 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-05 05:55:07.263835 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-05 05:55:07.263843 | orchestrator | + source /opt/manager-vars.sh
2026-02-05 05:55:07.263851 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 05:55:07.263859 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 05:55:07.263866 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 05:55:07.263874 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 05:55:07.263882 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 05:55:07.263890 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 05:55:07.263898 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-05 05:55:07.263906 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-05 05:55:07.263915 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 05:55:07.263922 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 05:55:07.263930 | orchestrator | ++ export ARA=false
2026-02-05 05:55:07.263939 | orchestrator | ++ ARA=false
2026-02-05 05:55:07.263947 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 05:55:07.263955 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 05:55:07.263962 | orchestrator | ++ export TEMPEST=false
2026-02-05 05:55:07.263970 | orchestrator | ++ TEMPEST=false
2026-02-05 05:55:07.263978 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 05:55:07.263986 | orchestrator | ++ IS_ZUUL=true
2026-02-05 05:55:07.263994 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 05:55:07.264002 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.180
2026-02-05 05:55:07.264010 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 05:55:07.264018 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 05:55:07.264025 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 05:55:07.264033 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 05:55:07.264041 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 05:55:07.264049 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 05:55:07.264057 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 05:55:07.264065 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 05:55:07.264072 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-05 05:55:07.264081 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-05 05:55:07.264089 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-05 05:55:07.264238 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-05 05:55:07.269046 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-05 05:55:07.269128 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-05 05:55:07.269148 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-05 05:55:07.269165 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-02-05 05:55:25.646377 | orchestrator | 2026-02-05 05:55:25 | ERROR  | Unable to get ansible vault password
2026-02-05 05:55:25.646527 | orchestrator | 2026-02-05 05:55:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-05 05:55:25.646545 | orchestrator | 2026-02-05 05:55:25 | ERROR  | Dropping encrypted entries
2026-02-05 05:55:25.676286 | orchestrator | 2026-02-05 05:55:25 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-05 05:55:25.676986 | orchestrator | 2026-02-05 05:55:25 | INFO  | Kolla configuration check passed
2026-02-05 05:55:25.874797 | orchestrator | 2026-02-05 05:55:25 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-02-05 05:55:25.892930 | orchestrator | 2026-02-05 05:55:25 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-02-05 05:55:26.125986 | orchestrator | + osism migrate rabbitmq3to4 list
2026-02-05 05:55:43.334062 | orchestrator | 2026-02-05 05:55:43 | ERROR  | Unable to get ansible vault password
2026-02-05 05:55:43.334138 | orchestrator | 2026-02-05 05:55:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-05 05:55:43.334146 | orchestrator | 2026-02-05 05:55:43 | ERROR  | Dropping encrypted entries
2026-02-05 05:55:43.364957 | orchestrator | 2026-02-05 05:55:43 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-05 05:55:43.545960 | orchestrator | 2026-02-05 05:55:43 | INFO  | Found 208 classic queue(s) in vhost '/': 2026-02-05 05:55:43.546183 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-05 05:55:43.546199 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-05 05:55:43.546210 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-05 05:55:43.546221 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-05 05:55:43.546231 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - barbican.workers_fanout_317ae58276fc4f32857d4c686a9f7c41 (vhost: /, messages: 0) 2026-02-05 05:55:43.546245 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - barbican.workers_fanout_4c52fe5129bd4f66b2e838586aeaf86f (vhost: /, messages: 0) 2026-02-05 05:55:43.546283 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - barbican.workers_fanout_af1ef2d57e354d9390224a696cc47d32 (vhost: /, messages: 0) 2026-02-05 05:55:43.546305 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-05 05:55:43.546320 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central (vhost: /, messages: 1) 2026-02-05 05:55:43.546555 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.546585 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.547034 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.547147 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central_fanout_45ee531bd12a49ca87125738a77d1f63 (vhost: /, messages: 0) 2026-02-05 05:55:43.547162 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central_fanout_4f87ed9a78764be29921079e706fc3b4 (vhost: /, messages: 0) 2026-02-05 
05:55:43.547177 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central_fanout_9045be3723d64d7da63ad418eed42f00 (vhost: /, messages: 0) 2026-02-05 05:55:43.547579 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central_fanout_a5118cfa8e364460a3a860eb520abda8 (vhost: /, messages: 0) 2026-02-05 05:55:43.547623 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central_fanout_c888f6d0d22e411dae70262806e535ff (vhost: /, messages: 0) 2026-02-05 05:55:43.547858 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - central_fanout_cc8d8d16c4d74114a61a75958ce97ecf (vhost: /, messages: 0) 2026-02-05 05:55:43.548089 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-05 05:55:43.548336 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.548361 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.548455 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.548740 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup_fanout_28faa6165cc34ac7ab72974298190dd1 (vhost: /, messages: 0) 2026-02-05 05:55:43.548800 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup_fanout_6dbeb67a249947a59247fd0837aaac14 (vhost: /, messages: 0) 2026-02-05 05:55:43.549051 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-backup_fanout_a96d1cfacdc647e29426252aea2fda1b (vhost: /, messages: 0) 2026-02-05 05:55:43.549071 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-05 05:55:43.549079 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.549168 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.549433 | orchestrator | 2026-02-05 
05:55:43 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.549547 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-scheduler_fanout_3d1f479766034126a8ec0be13414a25e (vhost: /, messages: 0) 2026-02-05 05:55:43.549565 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-scheduler_fanout_8be5fa3ea2934814bbabdb6aa899d1b9 (vhost: /, messages: 0) 2026-02-05 05:55:43.549748 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-scheduler_fanout_bff1e89eb79f44689ed40a903c097d3f (vhost: /, messages: 0) 2026-02-05 05:55:43.549760 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-05 05:55:43.549941 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-05 05:55:43.550147 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.550165 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_34fef4a331ea437c9efd7cdb900ecfc2 (vhost: /, messages: 0) 2026-02-05 05:55:43.550295 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-05 05:55:43.550443 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.550575 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_14b685e26eaa4833acd076c62b5c5329 (vhost: /, messages: 0) 2026-02-05 05:55:43.550680 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-05 05:55:43.550772 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.550940 | orchestrator | 2026-02-05 05:55:43 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_6f78af5ba96c4cb6939fa59038b5b8d0 (vhost: /, messages: 0) 2026-02-05 05:55:43.551035 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume_fanout_4810ab2dc6474b26a3a2ff9d1237e046 (vhost: /, messages: 0) 2026-02-05 05:55:43.551185 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume_fanout_62d8d9a526a54f7c81060666faafcddb (vhost: /, messages: 0) 2026-02-05 05:55:43.551291 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - cinder-volume_fanout_7ac31efdfde84b29b44bbac00cc6bbef (vhost: /, messages: 0) 2026-02-05 05:55:43.551434 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-05 05:55:43.551549 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-05 05:55:43.551830 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-05 05:55:43.551881 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-05 05:55:43.551968 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute_fanout_3309231819a9449cb826cc68b0858be2 (vhost: /, messages: 0) 2026-02-05 05:55:43.552058 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute_fanout_339df6632c1244aeb13b3e09e5212fa4 (vhost: /, messages: 0) 2026-02-05 05:55:43.552271 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - compute_fanout_9319d052beb34f9da9d97118576aa847 (vhost: /, messages: 0) 2026-02-05 05:55:43.552282 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-05 05:55:43.552582 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.552644 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.552656 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-05 05:55:43.553210 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor_fanout_145d9efd1c4e4dc783c5b5d0bd8f0c00 (vhost: /, messages: 0) 2026-02-05 05:55:43.553373 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor_fanout_3c6d26d45b3d4b65be3efa3d9401b7f9 (vhost: /, messages: 0) 2026-02-05 05:55:43.553627 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor_fanout_6c2262e0266f4badbe8e9602edd608a5 (vhost: /, messages: 0) 2026-02-05 05:55:43.553645 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor_fanout_74ede6cf67af4807b78b5ebf7b49b236 (vhost: /, messages: 0) 2026-02-05 05:55:43.553655 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor_fanout_8a90d976f65049de865d395939ea30fa (vhost: /, messages: 0) 2026-02-05 05:55:43.553678 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - conductor_fanout_e887e8086f9845849b0b8ffe1d532479 (vhost: /, messages: 0) 2026-02-05 05:55:43.553694 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - event.sample (vhost: /, messages: 4) 2026-02-05 05:55:43.553707 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-05 05:55:43.553720 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor.mcvqfg6p5ysq (vhost: /, messages: 0) 2026-02-05 05:55:43.553734 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor.mr7vakp5f7db (vhost: /, messages: 0) 2026-02-05 05:55:43.553904 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor.rqnam3kngo57 (vhost: /, messages: 0) 2026-02-05 05:55:43.553921 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_191cd104551d4f04809472461dc19d3e (vhost: /, messages: 0) 2026-02-05 05:55:43.554108 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_46c53208d9b34d1baadeb3b08aa66289 (vhost: /, messages: 0) 2026-02-05 05:55:43.554133 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_60fa6528d7004d3eb9c0ad13fe42c826 (vhost: /, 
messages: 0) 2026-02-05 05:55:43.554359 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_8dd8e20a4e8a4a20ab2e8b0fbb0d36ce (vhost: /, messages: 0) 2026-02-05 05:55:43.554442 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_999742d67f374fd19d56a5fcf03b163d (vhost: /, messages: 0) 2026-02-05 05:55:43.554505 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_9fa8be22c44143cbb700326fba2974d3 (vhost: /, messages: 0) 2026-02-05 05:55:43.554603 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_a9a8dbc094464fbe8a3477c839f8d9ac (vhost: /, messages: 0) 2026-02-05 05:55:43.554697 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_c62f6bdf87284c66afd179e2af1f1a85 (vhost: /, messages: 0) 2026-02-05 05:55:43.554768 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - magnum-conductor_fanout_f059f627a6f749098f7354909681d606 (vhost: /, messages: 0) 2026-02-05 05:55:43.554893 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-05 05:55:43.555010 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.555211 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.555236 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.555553 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data_fanout_6c9be36d04804eb2983e1557b43fb789 (vhost: /, messages: 0) 2026-02-05 05:55:43.555580 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data_fanout_75d085e7c9e14304923ca364e0a42ca1 (vhost: /, messages: 0) 2026-02-05 05:55:43.555599 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-data_fanout_e34db6e15b6041ef8e3901398aff022d (vhost: /, messages: 0) 2026-02-05 05:55:43.555697 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-02-05 05:55:43.555909 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.556023 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.556034 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.556180 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-scheduler_fanout_067768925db84a1e8b49021b165c4779 (vhost: /, messages: 0) 2026-02-05 05:55:43.556247 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-scheduler_fanout_3b40aac9f03e4c54a8bb8fcc95ab22b7 (vhost: /, messages: 0) 2026-02-05 05:55:43.556257 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-scheduler_fanout_bdebbe646a654b0c9ba44b0ee1bc38c8 (vhost: /, messages: 0) 2026-02-05 05:55:43.556358 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-02-05 05:55:43.556500 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-02-05 05:55:43.556578 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-02-05 05:55:43.556662 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-02-05 05:55:43.556810 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share_fanout_04db51a53ab547b18865a32d521f5ca6 (vhost: /, messages: 0) 2026-02-05 05:55:43.556821 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share_fanout_3c22949ed8ea4b49903c75676f33797c (vhost: /, messages: 0) 2026-02-05 05:55:43.556909 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - manila-share_fanout_4773630eaa47441593db8b23d902138d (vhost: /, messages: 0) 2026-02-05 05:55:43.557103 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-02-05 05:55:43.557203 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-02-05 05:55:43.557219 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-02-05 05:55:43.557319 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-02-05 05:55:43.557536 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-02-05 05:55:43.557559 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-02-05 05:55:43.557702 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-02-05 05:55:43.557717 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-02-05 05:55:43.557909 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.558010 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.558186 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.558288 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2_fanout_490f748cbc28450eb1200ba2fd435772 (vhost: /, messages: 0) 2026-02-05 05:55:43.558304 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2_fanout_ab4bd5d46df749b8aec7ac589d6f663c (vhost: /, messages: 0) 2026-02-05 05:55:43.558368 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - octavia_provisioning_v2_fanout_f789ccfc691f45ce8360a711d562cfed (vhost: /, messages: 0) 2026-02-05 05:55:43.558382 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer (vhost: /, messages: 0) 2026-02-05 05:55:43.558456 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - 
producer.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.558577 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.558643 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.558849 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer_fanout_59baf8e9e82a410ea9dedb8dd344a27e (vhost: /, messages: 0) 2026-02-05 05:55:43.558866 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer_fanout_7a44fe4b3af74042883c5670ff666a5c (vhost: /, messages: 0) 2026-02-05 05:55:43.559005 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer_fanout_82c7363c4e1f4a3e909738f99ed0bf05 (vhost: /, messages: 0) 2026-02-05 05:55:43.559019 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer_fanout_a2c492eb7fca4983be444a42edf04726 (vhost: /, messages: 0) 2026-02-05 05:55:43.559566 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer_fanout_d7327df65a2f4ac5a4a8d11cd90424e6 (vhost: /, messages: 0) 2026-02-05 05:55:43.559584 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - producer_fanout_f8e73fb835c84d2abc25bf36ea19f0ad (vhost: /, messages: 0) 2026-02-05 05:55:43.559593 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-02-05 05:55:43.559601 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.559609 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.559617 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.559625 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_19116e0423814a669ce4ae3fd820652e (vhost: /, messages: 0) 2026-02-05 05:55:43.560184 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_2efdcaa64ce342449e4812e832aaff35 (vhost: /, messages: 0) 2026-02-05 
05:55:43.560343 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_4017d075948046218d89d3ccab4b999a (vhost: /, messages: 0) 2026-02-05 05:55:43.560418 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_5402a705d9a446acb73513fa524c8c9d (vhost: /, messages: 0) 2026-02-05 05:55:43.560428 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_7dc3a9d628f54881b80f9a11643fe8af (vhost: /, messages: 0) 2026-02-05 05:55:43.560436 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_a17f1f62886d43a5b35cf70a71125fe6 (vhost: /, messages: 0) 2026-02-05 05:55:43.560459 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_a3f510dd73884d63a0c36d83d294095e (vhost: /, messages: 0) 2026-02-05 05:55:43.560619 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_a9f7f18838524f05a0379d6796386be1 (vhost: /, messages: 0) 2026-02-05 05:55:43.560697 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-plugin_fanout_d7ba1ea8fe394fd8adefae54bb5ebee5 (vhost: /, messages: 0) 2026-02-05 05:55:43.560707 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-02-05 05:55:43.560720 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.560729 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.561332 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.561396 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_1638e943f5884155bcad1cf249ab54fa (vhost: /, messages: 0) 2026-02-05 05:55:43.561409 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_17c20626dd32469db3b8cffcc34dc31f (vhost: /, messages: 0) 2026-02-05 05:55:43.561420 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - 
q-reports-plugin_fanout_614ff603acf24ddf96428d27a48158a8 (vhost: /, messages: 0) 2026-02-05 05:55:43.561439 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_6153d99c178b4cd4a41cc8b129e044e4 (vhost: /, messages: 0) 2026-02-05 05:55:43.561450 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_62fcddd1357d4400b003c1057ec08495 (vhost: /, messages: 0) 2026-02-05 05:55:43.561626 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_68831f36db534c10aef9e5bf937514b2 (vhost: /, messages: 0) 2026-02-05 05:55:43.561715 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_6980eccca48a43c398c2d18ccc6f696c (vhost: /, messages: 0) 2026-02-05 05:55:43.561798 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_7953478421434b5c897654eca4dfe84e (vhost: /, messages: 0) 2026-02-05 05:55:43.561811 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_831a71e728064077bbe214823ecb3c90 (vhost: /, messages: 0) 2026-02-05 05:55:43.561822 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_963074059c6d414db5554cc0853f1946 (vhost: /, messages: 0) 2026-02-05 05:55:43.561839 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_af91e49a52a247648a5b2a37d1ddde9b (vhost: /, messages: 0) 2026-02-05 05:55:43.561851 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_bc667fbeb81f436995a93143b9a71f0f (vhost: /, messages: 0) 2026-02-05 05:55:43.561862 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_c551f2e0a7d3462c863bf769a4843824 (vhost: /, messages: 0) 2026-02-05 05:55:43.562185 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_cd00ce4f5d004cc49ac64691e1bcfa24 (vhost: /, messages: 0) 2026-02-05 05:55:43.562214 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_d7a2a5bff42f4c169cb22b8a9f55f961 (vhost: /, messages: 0) 2026-02-05 
05:55:43.562246 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_dcb87f20b9674079b19a2ba8a7ec8304 (vhost: /, messages: 0) 2026-02-05 05:55:43.562257 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_e491e2f8109f4f04a54999d70dbeba3d (vhost: /, messages: 0) 2026-02-05 05:55:43.562428 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-reports-plugin_fanout_f62a4630b17948cd8021ac32a64d1196 (vhost: /, messages: 0) 2026-02-05 05:55:43.562447 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-02-05 05:55:43.562458 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-02-05 05:55:43.562706 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-02-05 05:55:43.562728 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-02-05 05:55:43.562743 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_2d11d10545034d15a0de47a9d85f9a6d (vhost: /, messages: 0) 2026-02-05 05:55:43.563187 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_64f064e80d634417b55143c6ddd7ef58 (vhost: /, messages: 0) 2026-02-05 05:55:43.563559 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_a6cf8540dad14f888e35154c301fb296 (vhost: /, messages: 0) 2026-02-05 05:55:43.563584 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_ab8f325b8c494c87a53e98e2cfeda5b2 (vhost: /, messages: 0) 2026-02-05 05:55:43.563679 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_ccc486ef9a6a40ba9c12cfdee8b0052e (vhost: /, messages: 0) 2026-02-05 05:55:43.563694 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - 
q-server-resource-versions_fanout_d225bce249dd461496a7796348a65711 (vhost: /, messages: 0) 2026-02-05 05:55:43.563713 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_d3cfc370d8d6449499bbd8b9ca4103ea (vhost: /, messages: 0) 2026-02-05 05:55:43.563721 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_d5a98014a1304d0a8ba8618c1389c4e9 (vhost: /, messages: 0) 2026-02-05 05:55:43.563729 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - q-server-resource-versions_fanout_fca164bfe7ed4c8dba593e96bddba289 (vhost: /, messages: 0) 2026-02-05 05:55:43.563737 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_09b181c22a274dad9183b3d4c9422dfd (vhost: /, messages: 0) 2026-02-05 05:55:43.563745 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_0f5bc7ac7f8c4242b54d134a4f8d7f05 (vhost: /, messages: 0) 2026-02-05 05:55:43.563814 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_124304833ab348be8717338b94a6d1ab (vhost: /, messages: 0) 2026-02-05 05:55:43.563826 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_1716e43cc1424092ac4e6c8a1ea8eb22 (vhost: /, messages: 0) 2026-02-05 05:55:43.563834 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_174d41cbc70b4a8984a3fbed2e73ea6f (vhost: /, messages: 0) 2026-02-05 05:55:43.564335 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_29278f5e34d740b095e19f81371ddf71 (vhost: /, messages: 0) 2026-02-05 05:55:43.564415 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_4ffa991de2e04f6f82e815d8da417ec9 (vhost: /, messages: 0) 2026-02-05 05:55:43.564426 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_5011045c8cc4423191a7934fa673b168 (vhost: /, messages: 0) 2026-02-05 05:55:43.564450 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_68ab64984cbc49d0ad29806f2f08007f (vhost: /, messages: 0) 2026-02-05 05:55:43.564486 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_6b701fa74e0c48a9abf969b08016e205 (vhost: /, messages: 0) 
2026-02-05 05:55:43.564496 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_739e11bf9e0f457d95da937e42a5a159 (vhost: /, messages: 0)
2026-02-05 05:55:43.564519 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_7401953546514678beee9132e36cd095 (vhost: /, messages: 1)
2026-02-05 05:55:43.564528 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_8a25bbe344734f818f6523a11687e5d6 (vhost: /, messages: 0)
2026-02-05 05:55:43.564592 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_8eac109332f1422ab2fd0b398aa05edf (vhost: /, messages: 0)
2026-02-05 05:55:43.564603 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_96192ad0cb3141c4915e934e8c8c2225 (vhost: /, messages: 0)
2026-02-05 05:55:43.564691 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_a39ea9cef77e499bae5654733c6b3d4c (vhost: /, messages: 0)
2026-02-05 05:55:43.565057 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_b4fca7bb10d04eae8ce58003dd10242b (vhost: /, messages: 0)
2026-02-05 05:55:43.565076 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_e0a74482dfb64fd4ad64818822d15cf3 (vhost: /, messages: 0)
2026-02-05 05:55:43.565084 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_f189a160c07f49d5b6f2ac3df87bc681 (vhost: /, messages: 0)
2026-02-05 05:55:43.565182 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - reply_fe71308d24ae45839f0dcabeba450be6 (vhost: /, messages: 0)
2026-02-05 05:55:43.565192 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-05 05:55:43.565205 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-05 05:55:43.565216 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-05 05:55:43.565235 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-05 05:55:43.565371 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler_fanout_016f3f1d55964a8086b732518f9c1aab (vhost: /, messages: 0)
2026-02-05 05:55:43.565389 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler_fanout_22841700c9914b8b960b0376faa3ead5 (vhost: /, messages: 0)
2026-02-05 05:55:43.565449 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler_fanout_6cf5589ca9c14c6194651173da007121 (vhost: /, messages: 0)
2026-02-05 05:55:43.565492 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler_fanout_a5db731c32374f1cba33e392b814d9bf (vhost: /, messages: 0)
2026-02-05 05:55:43.565758 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler_fanout_a8fb16ae82e94186a25b04a781a42160 (vhost: /, messages: 0)
2026-02-05 05:55:43.565780 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - scheduler_fanout_cc5fbccecb924c58a5fd40c18fabb880 (vhost: /, messages: 0)
2026-02-05 05:55:43.565789 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-05 05:55:43.565882 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-05 05:55:43.565896 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-05 05:55:43.565970 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-05 05:55:43.566052 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker_fanout_0b4057ac2360412cb7495bf06ae3dacf (vhost: /, messages: 0)
2026-02-05 05:55:43.566157 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker_fanout_50daeb58baf74c67b342260c57327efb (vhost: /, messages: 0)
2026-02-05 05:55:43.566170 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker_fanout_77ab1619ede84508bed211d235773604 (vhost: /, messages: 0)
2026-02-05 05:55:43.566348 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker_fanout_84321182f4ee480f83591d2ea6121ad9 (vhost: /, messages: 0)
2026-02-05 05:55:43.566361 | orchestrator | 2026-02-05 05:55:43 | INFO  |  - worker_fanout_bb002850796e458c83e6057a434dd55b (vhost: /, messages: 0)
2026-02-05 05:55:43.767168 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-05 05:55:45.188181 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-05 05:55:45.188289 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-02-05 05:55:45.188306 | orchestrator |                                   [--vhost VHOST]
2026-02-05 05:55:45.188318 | orchestrator |                                   [{list,delete,prepare,check}]
2026-02-05 05:55:45.188335 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-05 05:55:45.188356 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-05 05:55:45.767752 | orchestrator | ERROR
2026-02-05 05:55:45.767931 | orchestrator | {
2026-02-05 05:55:45.767984 | orchestrator |   "delta": "2:03:56.036641",
2026-02-05 05:55:45.768016 | orchestrator |   "end": "2026-02-05 05:55:45.358546",
2026-02-05 05:55:45.768046 | orchestrator |   "msg": "non-zero return code",
2026-02-05 05:55:45.768075 | orchestrator |   "rc": 2,
2026-02-05 05:55:45.768103 | orchestrator |   "start": "2026-02-05 03:51:49.321905"
2026-02-05 05:55:45.768122 | orchestrator | } failure
2026-02-05 05:55:46.043786 |
2026-02-05 05:55:46.043925 | PLAY RECAP
2026-02-05 05:55:46.043997 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-05 05:55:46.044028 |
2026-02-05 05:55:46.300042 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-05 05:55:46.301614 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-05 05:55:47.044686 |
2026-02-05 05:55:47.044844 | PLAY [Post output play]
2026-02-05 05:55:47.062200 |
2026-02-05 05:55:47.062335 | LOOP [stage-output : Register sources]
2026-02-05 05:55:47.133227 |
2026-02-05 05:55:47.133552 | TASK [stage-output : Check sudo]
2026-02-05 05:55:48.049241 | orchestrator | sudo: a password is required
2026-02-05 05:55:48.178140 | orchestrator | ok: Runtime: 0:00:00.014871
2026-02-05 05:55:48.192368 |
2026-02-05 05:55:48.192543 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-05 05:55:48.233333 |
2026-02-05 05:55:48.233697 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-05 05:55:48.303391 | orchestrator | ok
2026-02-05 05:55:48.312474 |
2026-02-05 05:55:48.312669 | LOOP [stage-output : Ensure target folders exist]
2026-02-05 05:55:48.796233 | orchestrator | ok: "docs"
2026-02-05 05:55:48.796620 |
2026-02-05 05:55:49.059186 | orchestrator | ok: "artifacts"
2026-02-05 05:55:49.324010 | orchestrator | ok: "logs"
2026-02-05 05:55:49.346814 |
2026-02-05 05:55:49.347019 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-05 05:55:49.384294 |
2026-02-05 05:55:49.384748 | TASK [stage-output : Make all log files readable]
2026-02-05 05:55:49.758125 | orchestrator | ok
2026-02-05 05:55:49.768004 |
2026-02-05 05:55:49.768152 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-05 05:55:49.803069 | orchestrator | skipping: Conditional result was False
2026-02-05 05:55:49.819574 |
2026-02-05 05:55:49.819752 | TASK [stage-output : Discover log files for compression]
2026-02-05 05:55:49.844490 | orchestrator | skipping: Conditional result was False
2026-02-05 05:55:49.860234 |
2026-02-05 05:55:49.860392 | LOOP [stage-output : Archive everything from logs]
2026-02-05 05:55:49.903795 |
2026-02-05 05:55:49.903957 | PLAY [Post cleanup play]
2026-02-05 05:55:49.912113 |
2026-02-05 05:55:49.912219 | TASK [Set cloud fact (Zuul deployment)]
2026-02-05 05:55:49.961218 | orchestrator | ok
2026-02-05 05:55:49.969430 |
2026-02-05 05:55:49.969537 | TASK [Set cloud fact (local deployment)]
2026-02-05 05:55:49.993053 | orchestrator | skipping: Conditional result was False
2026-02-05 05:55:50.001624 |
2026-02-05 05:55:50.001738 | TASK [Clean the cloud environment]
2026-02-05 05:55:50.633071 | orchestrator | 2026-02-05 05:55:50 - clean up servers
2026-02-05 05:55:51.377735 | orchestrator | 2026-02-05 05:55:51 - testbed-manager
2026-02-05 05:55:51.460765 | orchestrator | 2026-02-05 05:55:51 - testbed-node-3
2026-02-05 05:55:51.545274 | orchestrator | 2026-02-05 05:55:51 - testbed-node-1
2026-02-05 05:55:51.646189 | orchestrator | 2026-02-05 05:55:51 - testbed-node-0
2026-02-05 05:55:51.730045 | orchestrator | 2026-02-05 05:55:51 - testbed-node-2
2026-02-05 05:55:51.821518 | orchestrator | 2026-02-05 05:55:51 - testbed-node-4
2026-02-05 05:55:51.907953 | orchestrator | 2026-02-05 05:55:51 - testbed-node-5
2026-02-05 05:55:51.995949 | orchestrator | 2026-02-05 05:55:51 - clean up keypairs
2026-02-05 05:55:52.014775 | orchestrator | 2026-02-05 05:55:52 - testbed
2026-02-05 05:55:52.040674 | orchestrator | 2026-02-05 05:55:52 - wait for servers to be gone
2026-02-05 05:56:02.911233 | orchestrator | 2026-02-05 05:56:02 - clean up ports
2026-02-05 05:56:03.096656 | orchestrator | 2026-02-05 05:56:03 - 10d07b07-cb6e-4b58-8ce7-98dd3a5b3b09
2026-02-05 05:56:03.379963 | orchestrator | 2026-02-05 05:56:03 - 4267e4ce-08dc-4cb4-a88f-fd6ad0dad61e
2026-02-05 05:56:03.596567 | orchestrator | 2026-02-05 05:56:03 - 4f003a42-b486-4b4a-9318-5c7d37920477
2026-02-05 05:56:04.107351 | orchestrator | 2026-02-05 05:56:04 - 5c5a8ea9-6229-4400-a5d1-2c55c34c99d8
2026-02-05 05:56:04.326096 | orchestrator | 2026-02-05 05:56:04 - 738ef432-e48c-4290-bc11-9fece27dc5d5
2026-02-05 05:56:04.567634 | orchestrator | 2026-02-05 05:56:04 - ae1da154-41f0-4ce9-9ce0-c9f0405f44ba
2026-02-05 05:56:04.808874 | orchestrator | 2026-02-05 05:56:04 - f1207ef6-8ec6-4e1e-ad96-28f7b2ad55e0
2026-02-05 05:56:05.035887 | orchestrator | 2026-02-05 05:56:05 - clean up volumes
2026-02-05 05:56:05.191603 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-4-node-base
2026-02-05 05:56:05.240655 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-manager-base
2026-02-05 05:56:05.285820 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-3-node-base
2026-02-05 05:56:05.327192 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-1-node-base
2026-02-05 05:56:05.380004 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-5-node-base
2026-02-05 05:56:05.425656 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-2-node-base
2026-02-05 05:56:05.466754 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-3-node-3
2026-02-05 05:56:05.509240 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-8-node-5
2026-02-05 05:56:05.550772 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-4-node-4
2026-02-05 05:56:05.590692 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-2-node-5
2026-02-05 05:56:05.633347 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-0-node-base
2026-02-05 05:56:05.677521 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-6-node-3
2026-02-05 05:56:05.720156 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-1-node-4
2026-02-05 05:56:05.765247 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-7-node-4
2026-02-05 05:56:05.807377 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-0-node-3
2026-02-05 05:56:05.985783 | orchestrator | 2026-02-05 05:56:05 - testbed-volume-5-node-5
2026-02-05 05:56:06.037032 | orchestrator | 2026-02-05 05:56:06 - disconnect routers
2026-02-05 05:56:06.107413 | orchestrator | 2026-02-05 05:56:06 - testbed
2026-02-05 05:56:07.499489 | orchestrator | 2026-02-05 05:56:07 - clean up subnets
2026-02-05 05:56:07.550325 | orchestrator | 2026-02-05 05:56:07 - subnet-testbed-management
2026-02-05 05:56:07.720213 | orchestrator | 2026-02-05 05:56:07 - clean up networks
2026-02-05 05:56:07.886100 | orchestrator | 2026-02-05 05:56:07 - net-testbed-management
2026-02-05 05:56:08.179180 | orchestrator | 2026-02-05 05:56:08 - clean up security groups
2026-02-05 05:56:08.218312 | orchestrator | 2026-02-05 05:56:08 - testbed-node
2026-02-05 05:56:08.325152 | orchestrator | 2026-02-05 05:56:08 - testbed-management
2026-02-05 05:56:08.438273 | orchestrator | 2026-02-05 05:56:08 - clean up floating ips
2026-02-05 05:56:08.473263 | orchestrator | 2026-02-05 05:56:08 - 81.163.193.180
2026-02-05 05:56:08.818565 | orchestrator | 2026-02-05 05:56:08 - clean up routers
2026-02-05 05:56:08.919956 | orchestrator | 2026-02-05 05:56:08 - testbed
2026-02-05 05:56:10.063495 | orchestrator | ok: Runtime: 0:00:19.460254
2026-02-05 05:56:10.068178 |
2026-02-05 05:56:10.068385 | PLAY RECAP
2026-02-05 05:56:10.068522 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-05 05:56:10.068623 |
2026-02-05 05:56:10.201648 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-05 05:56:10.204183 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-05 05:56:10.956268 |
2026-02-05 05:56:10.956436 | PLAY [Cleanup play]
2026-02-05 05:56:10.972393 |
2026-02-05 05:56:10.972529 | TASK [Set cloud fact (Zuul deployment)]
2026-02-05 05:56:11.025378 | orchestrator | ok
2026-02-05 05:56:11.032948 |
2026-02-05 05:56:11.033083 | TASK [Set cloud fact (local deployment)]
2026-02-05 05:56:11.067651 | orchestrator | skipping: Conditional result was False
2026-02-05 05:56:11.081747 |
2026-02-05 05:56:11.081909 | TASK [Clean the cloud environment]
2026-02-05 05:56:12.225946 | orchestrator | 2026-02-05 05:56:12 - clean up servers
2026-02-05 05:56:12.698230 | orchestrator | 2026-02-05 05:56:12 - clean up keypairs
2026-02-05 05:56:12.720895 | orchestrator | 2026-02-05 05:56:12 - wait for servers to be gone
2026-02-05 05:56:12.761738 | orchestrator | 2026-02-05 05:56:12 - clean up ports
2026-02-05 05:56:12.837515 | orchestrator | 2026-02-05 05:56:12 - clean up volumes
2026-02-05 05:56:12.902388 | orchestrator | 2026-02-05 05:56:12 - disconnect routers
2026-02-05 05:56:12.933492 | orchestrator | 2026-02-05 05:56:12 - clean up subnets
2026-02-05 05:56:12.958685 | orchestrator | 2026-02-05 05:56:12 - clean up networks
2026-02-05 05:56:13.083245 | orchestrator | 2026-02-05 05:56:13 - clean up security groups
2026-02-05 05:56:13.119343 | orchestrator | 2026-02-05 05:56:13 - clean up floating ips
2026-02-05 05:56:13.148542 | orchestrator | 2026-02-05 05:56:13 - clean up routers
2026-02-05 05:56:13.620400 | orchestrator | ok: Runtime: 0:00:01.351593
2026-02-05 05:56:13.624268 |
2026-02-05 05:56:13.624423 | PLAY RECAP
2026-02-05 05:56:13.624549 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-05 05:56:13.624643 |
2026-02-05 05:56:13.745608 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-05 05:56:13.746633 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-05 05:56:14.477350 |
2026-02-05 05:56:14.477536 | PLAY [Base post-fetch]
2026-02-05 05:56:14.492947 |
2026-02-05 05:56:14.493074 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-05 05:56:14.547932 | orchestrator | skipping: Conditional result was False
2026-02-05 05:56:14.555965 |
2026-02-05 05:56:14.556132 | TASK [fetch-output : Set log path for single node]
2026-02-05 05:56:14.601376 | orchestrator | ok
2026-02-05 05:56:14.610514 |
2026-02-05 05:56:14.610702 | LOOP [fetch-output : Ensure local output dirs]
2026-02-05 05:56:15.087948 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/work/logs"
2026-02-05 05:56:15.359975 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/work/artifacts"
2026-02-05 05:56:15.663541 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dfd18b6ff29c46d7a0487cf75b178ce7/work/docs"
2026-02-05 05:56:15.692414 |
2026-02-05 05:56:15.692642 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-05 05:56:16.641337 | orchestrator | changed: .d..t...... ./
2026-02-05 05:56:16.641699 | orchestrator | changed: All items complete
2026-02-05 05:56:16.641765 |
2026-02-05 05:56:17.366777 | orchestrator | changed: .d..t...... ./
2026-02-05 05:56:18.088626 | orchestrator | changed: .d..t...... ./
2026-02-05 05:56:18.107673 |
2026-02-05 05:56:18.108380 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-05 05:56:18.145723 | orchestrator | skipping: Conditional result was False
2026-02-05 05:56:18.149721 | orchestrator | skipping: Conditional result was False
2026-02-05 05:56:18.157824 |
2026-02-05 05:56:18.157903 | PLAY RECAP
2026-02-05 05:56:18.157957 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-05 05:56:18.157983 |
2026-02-05 05:56:18.279930 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-05 05:56:18.282342 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-05 05:56:19.043361 |
2026-02-05 05:56:19.043526 | PLAY [Base post]
2026-02-05 05:56:19.058119 |
2026-02-05 05:56:19.058265 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-05 05:56:20.061217 | orchestrator | changed
2026-02-05 05:56:20.071590 |
2026-02-05 05:56:20.071735 | PLAY RECAP
2026-02-05 05:56:20.071813 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-05 05:56:20.071888 |
2026-02-05 05:56:20.192214 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-05 05:56:20.194648 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-05 05:56:21.103350 |
2026-02-05 05:56:21.103521 | PLAY [Base post-logs]
2026-02-05 05:56:21.115365 |
2026-02-05 05:56:21.115636 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-05 05:56:21.642102 | localhost | changed
2026-02-05 05:56:21.657322 |
2026-02-05 05:56:21.657539 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-05 05:56:21.685194 | localhost | ok
2026-02-05 05:56:21.689801 |
2026-02-05 05:56:21.689928 | TASK [Set zuul-log-path fact]
2026-02-05 05:56:21.706084 | localhost | ok
2026-02-05 05:56:21.718206 |
2026-02-05 05:56:21.718388 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-05 05:56:21.743534 | localhost | ok
2026-02-05 05:56:21.747076 |
2026-02-05 05:56:21.747178 | TASK [upload-logs : Create log directories]
2026-02-05 05:56:22.241722 | localhost | changed
2026-02-05 05:56:22.244676 |
2026-02-05 05:56:22.244788 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-05 05:56:22.745425 | localhost -> localhost | ok: Runtime: 0:00:00.006999
2026-02-05 05:56:22.749505 |
2026-02-05 05:56:22.749668 | TASK [upload-logs : Upload logs to log server]
2026-02-05 05:56:23.316311 | localhost | Output suppressed because no_log was given
2026-02-05 05:56:23.322381 |
2026-02-05 05:56:23.322692 | LOOP [upload-logs : Compress console log and json output]
2026-02-05 05:56:23.385151 | localhost | skipping: Conditional result was False
2026-02-05 05:56:23.390247 | localhost | skipping: Conditional result was False
2026-02-05 05:56:23.399007 |
2026-02-05 05:56:23.399270 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-05 05:56:23.445147 | localhost | skipping: Conditional result was False
2026-02-05 05:56:23.445723 |
2026-02-05 05:56:23.449144 | localhost | skipping: Conditional result was False
2026-02-05 05:56:23.464300 |
2026-02-05 05:56:23.464538 | LOOP [upload-logs : Upload console log and json output]